00:00:00.000 Started by upstream project "autotest-nightly" build number 4276 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3639 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.194 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.196 The recommended git tool is: git 00:00:00.196 using credential 00000000-0000-0000-0000-000000000002 00:00:00.201 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.233 Fetching changes from the remote Git repository 00:00:00.234 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.275 Using shallow fetch with depth 1 00:00:00.275 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.275 > git --version # timeout=10 00:00:00.302 > git --version # 'git version 2.39.2' 00:00:00.302 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.320 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.320 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.159 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.170 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.182 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:08.182 > git config core.sparsecheckout # timeout=10 00:00:08.195 > git read-tree -mu HEAD # timeout=10 00:00:08.210 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:08.228 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:08.228 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:08.330 [Pipeline] Start of Pipeline 00:00:08.342 [Pipeline] library 00:00:08.343 Loading library shm_lib@master 00:00:08.343 Library shm_lib@master is cached. Copying from home. 00:00:08.359 [Pipeline] node 00:00:08.373 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:08.374 [Pipeline] { 00:00:08.382 [Pipeline] catchError 00:00:08.383 [Pipeline] { 00:00:08.394 [Pipeline] wrap 00:00:08.401 [Pipeline] { 00:00:08.407 [Pipeline] stage 00:00:08.408 [Pipeline] { (Prologue) 00:00:08.593 [Pipeline] sh 00:00:08.877 + logger -p user.info -t JENKINS-CI 00:00:08.896 [Pipeline] echo 00:00:08.897 Node: WFP21 00:00:08.906 [Pipeline] sh 00:00:09.204 [Pipeline] setCustomBuildProperty 00:00:09.217 [Pipeline] echo 00:00:09.219 Cleanup processes 00:00:09.224 [Pipeline] sh 00:00:09.513 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:09.513 3031513 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:09.526 [Pipeline] sh 00:00:09.811 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:09.811 ++ grep -v 'sudo pgrep' 00:00:09.811 ++ awk '{print $1}' 00:00:09.811 + sudo kill -9 00:00:09.811 + true 00:00:09.826 [Pipeline] cleanWs 00:00:09.836 [WS-CLEANUP] Deleting project workspace... 00:00:09.836 [WS-CLEANUP] Deferred wipeout is used... 
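The workspace cleanup traced above boils down to roughly the following pipeline (a sketch reconstructed from the xtrace lines; the actual jbp script may differ in detail):

    # Kill any processes still running out of the previous run's SPDK tree.
    # The grep filters out the pgrep invocation itself; '|| true' swallows the
    # error when nothing matches (as happens here, hence the bare 'kill -9').
    sudo kill -9 $(sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk \
        | grep -v 'sudo pgrep' | awk '{print $1}') || true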
00:00:09.842 [WS-CLEANUP] done 00:00:09.847 [Pipeline] setCustomBuildProperty 00:00:09.862 [Pipeline] sh 00:00:10.145 + sudo git config --global --replace-all safe.directory '*' 00:00:10.261 [Pipeline] httpRequest 00:00:10.795 [Pipeline] echo 00:00:10.797 Sorcerer 10.211.164.20 is alive 00:00:10.808 [Pipeline] retry 00:00:10.811 [Pipeline] { 00:00:10.826 [Pipeline] httpRequest 00:00:10.830 HttpMethod: GET 00:00:10.831 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.832 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.856 Response Code: HTTP/1.1 200 OK 00:00:10.857 Success: Status code 200 is in the accepted range: 200,404 00:00:10.857 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:32.704 [Pipeline] } 00:00:32.721 [Pipeline] // retry 00:00:32.729 [Pipeline] sh 00:00:33.012 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:33.028 [Pipeline] httpRequest 00:00:33.477 [Pipeline] echo 00:00:33.479 Sorcerer 10.211.164.20 is alive 00:00:33.489 [Pipeline] retry 00:00:33.491 [Pipeline] { 00:00:33.505 [Pipeline] httpRequest 00:00:33.510 HttpMethod: GET 00:00:33.510 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:33.511 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:33.524 Response Code: HTTP/1.1 200 OK 00:00:33.524 Success: Status code 200 is in the accepted range: 200,404 00:00:33.524 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:29.379 [Pipeline] } 00:01:29.396 [Pipeline] // retry 00:01:29.403 [Pipeline] sh 00:01:29.687 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:32.235 [Pipeline] sh 00:01:32.520 + git -C spdk log --oneline -n5 00:01:32.520 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:32.520 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:32.520 4bcab9fb9 correct kick for CQ full case 00:01:32.520 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:32.520 318515b44 nvme/perf: interrupt mode support for pcie controller 00:01:32.532 [Pipeline] } 00:01:32.547 [Pipeline] // stage 00:01:32.557 [Pipeline] stage 00:01:32.559 [Pipeline] { (Prepare) 00:01:32.576 [Pipeline] writeFile 00:01:32.591 [Pipeline] sh 00:01:32.875 + logger -p user.info -t JENKINS-CI 00:01:32.889 [Pipeline] sh 00:01:33.177 + logger -p user.info -t JENKINS-CI 00:01:33.189 [Pipeline] sh 00:01:33.475 + cat autorun-spdk.conf 00:01:33.475 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.475 SPDK_TEST_NVMF=1 00:01:33.475 SPDK_TEST_NVME_CLI=1 00:01:33.475 SPDK_TEST_NVMF_NICS=mlx5 00:01:33.475 SPDK_RUN_ASAN=1 00:01:33.475 SPDK_RUN_UBSAN=1 00:01:33.475 NET_TYPE=phy 00:01:33.483 RUN_NIGHTLY=1 00:01:33.487 [Pipeline] readFile 00:01:33.512 [Pipeline] withEnv 00:01:33.514 [Pipeline] { 00:01:33.526 [Pipeline] sh 00:01:33.812 + set -ex 00:01:33.812 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:33.812 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:33.812 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.812 ++ SPDK_TEST_NVMF=1 00:01:33.812 ++ SPDK_TEST_NVME_CLI=1 00:01:33.812 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:33.812 ++ SPDK_RUN_ASAN=1 00:01:33.812 ++ 
SPDK_RUN_UBSAN=1 00:01:33.812 ++ NET_TYPE=phy 00:01:33.812 ++ RUN_NIGHTLY=1 00:01:33.812 + case $SPDK_TEST_NVMF_NICS in 00:01:33.812 + DRIVERS=mlx5_ib 00:01:33.812 + [[ -n mlx5_ib ]] 00:01:33.813 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:33.813 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:40.384 rmmod: ERROR: Module irdma is not currently loaded 00:01:40.384 rmmod: ERROR: Module i40iw is not currently loaded 00:01:40.384 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:40.384 + true 00:01:40.384 + for D in $DRIVERS 00:01:40.384 + sudo modprobe mlx5_ib 00:01:40.384 + exit 0 00:01:40.393 [Pipeline] } 00:01:40.405 [Pipeline] // withEnv 00:01:40.412 [Pipeline] } 00:01:40.428 [Pipeline] // stage 00:01:40.437 [Pipeline] catchError 00:01:40.438 [Pipeline] { 00:01:40.453 [Pipeline] timeout 00:01:40.454 Timeout set to expire in 1 hr 0 min 00:01:40.456 [Pipeline] { 00:01:40.470 [Pipeline] stage 00:01:40.473 [Pipeline] { (Tests) 00:01:40.488 [Pipeline] sh 00:01:40.776 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:40.776 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:40.776 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:40.776 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:40.776 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:40.776 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:40.776 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:40.776 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:40.776 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:40.776 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:40.776 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:40.776 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:40.776 + source /etc/os-release 00:01:40.776 ++ NAME='Fedora Linux' 00:01:40.776 ++ VERSION='39 (Cloud Edition)' 00:01:40.776 ++ ID=fedora 00:01:40.776 ++ VERSION_ID=39 00:01:40.776 ++ VERSION_CODENAME= 00:01:40.776 ++ PLATFORM_ID=platform:f39 00:01:40.776 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:40.776 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:40.776 ++ LOGO=fedora-logo-icon 00:01:40.776 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:40.776 ++ HOME_URL=https://fedoraproject.org/ 00:01:40.776 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:40.776 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:40.776 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:40.776 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:40.776 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:40.776 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:40.776 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:40.776 ++ SUPPORT_END=2024-11-12 00:01:40.776 ++ VARIANT='Cloud Edition' 00:01:40.776 ++ VARIANT_ID=cloud 00:01:40.776 + uname -a 00:01:40.776 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:40.776 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:44.064 Hugepages 00:01:44.064 node hugesize free / total 00:01:44.064 node0 1048576kB 0 / 0 00:01:44.064 node0 2048kB 0 / 0 00:01:44.064 node1 1048576kB 0 / 0 00:01:44.064 node1 2048kB 0 / 0 00:01:44.064 00:01:44.064 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:44.064 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:44.064 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:44.064 I/OAT 0000:00:04.2 8086 2021 
0 ioatdma - - 00:01:44.064 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:44.064 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:44.064 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:44.064 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:44.064 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:44.064 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:44.064 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:44.064 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:44.065 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:44.065 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:44.065 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:44.065 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:44.065 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:44.065 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:44.065 + rm -f /tmp/spdk-ld-path 00:01:44.065 + source autorun-spdk.conf 00:01:44.065 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.065 ++ SPDK_TEST_NVMF=1 00:01:44.065 ++ SPDK_TEST_NVME_CLI=1 00:01:44.065 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:44.065 ++ SPDK_RUN_ASAN=1 00:01:44.065 ++ SPDK_RUN_UBSAN=1 00:01:44.065 ++ NET_TYPE=phy 00:01:44.065 ++ RUN_NIGHTLY=1 00:01:44.065 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:44.065 + [[ -n '' ]] 00:01:44.065 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:44.065 + for M in /var/spdk/build-*-manifest.txt 00:01:44.065 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:44.065 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:44.065 + for M in /var/spdk/build-*-manifest.txt 00:01:44.065 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:44.065 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:44.065 + for M in /var/spdk/build-*-manifest.txt 00:01:44.065 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:44.065 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:44.065 ++ uname 00:01:44.065 + [[ Linux == \L\i\n\u\x ]] 00:01:44.065 + sudo dmesg -T 00:01:44.065 + sudo dmesg --clear 00:01:44.065 + dmesg_pid=3032458 00:01:44.065 + [[ Fedora Linux == FreeBSD ]] 00:01:44.065 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:44.065 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:44.065 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:44.065 + [[ -x /usr/src/fio-static/fio ]] 00:01:44.065 + export FIO_BIN=/usr/src/fio-static/fio 00:01:44.065 + FIO_BIN=/usr/src/fio-static/fio 00:01:44.065 + sudo dmesg -Tw 00:01:44.065 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:44.065 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:44.065 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:44.065 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:44.065 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:44.065 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:44.065 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:44.065 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:44.065 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:44.065 15:38:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:44.065 15:38:29 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:44.065 15:38:29 -- nvmf-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.065 15:38:29 -- nvmf-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:44.065 15:38:29 -- nvmf-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:44.065 15:38:29 -- nvmf-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_NICS=mlx5 00:01:44.065 15:38:29 -- nvmf-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:44.065 15:38:29 -- nvmf-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:44.065 15:38:29 -- nvmf-phy-autotest/autorun-spdk.conf@7 -- $ NET_TYPE=phy 00:01:44.065 15:38:29 -- nvmf-phy-autotest/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1 00:01:44.065 15:38:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:44.065 15:38:29 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:44.065 15:38:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:44.065 15:38:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:44.065 15:38:29 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:44.065 15:38:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:44.065 15:38:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.065 15:38:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.065 15:38:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.065 15:38:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.065 15:38:29 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.065 15:38:29 -- paths/export.sh@5 -- $ export PATH 00:01:44.065 15:38:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.065 15:38:29 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:44.065 15:38:29 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:44.065 15:38:29 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731854309.XXXXXX 00:01:44.065 15:38:29 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731854309.X4HVxa 00:01:44.065 15:38:29 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:44.065 15:38:29 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:44.065 15:38:29 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:01:44.065 15:38:29 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:44.065 15:38:29 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:44.065 15:38:29 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:44.065 15:38:29 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:44.065 15:38:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.065 15:38:29 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:44.065 15:38:29 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:44.065 15:38:29 -- pm/common@17 -- $ local monitor 00:01:44.065 15:38:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.065 15:38:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.065 15:38:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.065 15:38:29 -- pm/common@21 -- $ date +%s 00:01:44.065 15:38:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.065 15:38:29 -- pm/common@21 -- $ date +%s 00:01:44.065 15:38:29 -- pm/common@25 -- $ sleep 1 00:01:44.065 15:38:29 -- pm/common@21 -- $ date +%s 00:01:44.065 15:38:29 -- pm/common@21 -- $ date +%s 00:01:44.065 15:38:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731854309 00:01:44.065 15:38:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731854309 00:01:44.065 15:38:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731854309 00:01:44.065 15:38:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731854309 00:01:44.325 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731854309_collect-cpu-load.pm.log 00:01:44.325 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731854309_collect-vmstat.pm.log 00:01:44.325 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731854309_collect-cpu-temp.pm.log 00:01:44.325 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731854309_collect-bmc-pm.bmc.pm.log 00:01:45.261 15:38:30 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:45.262 15:38:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:45.262 15:38:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:45.262 15:38:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:45.262 15:38:30 -- spdk/autobuild.sh@16 -- $ date -u 00:01:45.262 Sun Nov 17 02:38:30 PM UTC 2024 00:01:45.262 15:38:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:45.262 v25.01-pre-189-g83e8405e4 00:01:45.262 15:38:30 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:45.262 15:38:30 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:45.262 15:38:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:45.262 15:38:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:45.262 15:38:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.262 ************************************ 00:01:45.262 START TEST asan 00:01:45.262 ************************************ 00:01:45.262 15:38:30 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:45.262 using asan 00:01:45.262 00:01:45.262 real 0m0.001s 00:01:45.262 user 0m0.000s 00:01:45.262 sys 0m0.001s 00:01:45.262 15:38:30 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:45.262 15:38:30 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:45.262 ************************************ 00:01:45.262 END TEST asan 00:01:45.262 ************************************ 00:01:45.262 15:38:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:45.262 15:38:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:45.262 15:38:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:45.262 15:38:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:45.262 15:38:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.262 ************************************ 00:01:45.262 START TEST ubsan 00:01:45.262 ************************************ 00:01:45.262 15:38:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:45.262 using ubsan 00:01:45.262 00:01:45.262 real 0m0.000s 00:01:45.262 user 
0m0.000s 00:01:45.262 sys 0m0.000s 00:01:45.262 15:38:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:45.262 15:38:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:45.262 ************************************ 00:01:45.262 END TEST ubsan 00:01:45.262 ************************************ 00:01:45.262 15:38:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:45.262 15:38:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:45.262 15:38:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:45.262 15:38:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:45.262 15:38:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:45.262 15:38:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:45.262 15:38:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:45.262 15:38:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:45.262 15:38:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:45.521 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:01:45.521 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:01:45.781 Using 'verbs' RDMA provider 00:01:58.922 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:13.815 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:13.815 Creating mk/config.mk...done. 00:02:13.815 Creating mk/cc.flags.mk...done. 00:02:13.815 Type 'make' to build. 00:02:13.815 15:38:57 -- spdk/autobuild.sh@70 -- $ run_test make make -j112 00:02:13.815 15:38:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:13.815 15:38:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:13.815 15:38:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:13.815 ************************************ 00:02:13.815 START TEST make 00:02:13.815 ************************************ 00:02:13.815 15:38:57 make -- common/autotest_common.sh@1129 -- $ make -j112 00:02:13.815 make[1]: Nothing to be done for 'all'. 
00:02:21.931 The Meson build system 00:02:21.931 Version: 1.5.0 00:02:21.931 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:21.931 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:21.931 Build type: native build 00:02:21.931 Program cat found: YES (/usr/bin/cat) 00:02:21.931 Project name: DPDK 00:02:21.931 Project version: 24.03.0 00:02:21.931 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:21.931 C linker for the host machine: cc ld.bfd 2.40-14 00:02:21.931 Host machine cpu family: x86_64 00:02:21.931 Host machine cpu: x86_64 00:02:21.931 Message: ## Building in Developer Mode ## 00:02:21.931 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:21.931 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:21.931 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:21.931 Program python3 found: YES (/usr/bin/python3) 00:02:21.931 Program cat found: YES (/usr/bin/cat) 00:02:21.931 Compiler for C supports arguments -march=native: YES 00:02:21.931 Checking for size of "void *" : 8 00:02:21.931 Checking for size of "void *" : 8 (cached) 00:02:21.931 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:21.931 Library m found: YES 00:02:21.931 Library numa found: YES 00:02:21.931 Has header "numaif.h" : YES 00:02:21.931 Library fdt found: NO 00:02:21.931 Library execinfo found: NO 00:02:21.931 Has header "execinfo.h" : YES 00:02:21.931 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:21.931 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:21.931 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:21.931 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:21.931 Run-time dependency openssl found: YES 3.1.1 00:02:21.931 Run-time dependency libpcap found: YES 1.10.4 00:02:21.931 Has header "pcap.h" with dependency libpcap: YES 00:02:21.931 Compiler for C supports arguments -Wcast-qual: YES 00:02:21.931 Compiler for C supports arguments -Wdeprecated: YES 00:02:21.931 Compiler for C supports arguments -Wformat: YES 00:02:21.931 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:21.931 Compiler for C supports arguments -Wformat-security: NO 00:02:21.931 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:21.931 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:21.931 Compiler for C supports arguments -Wnested-externs: YES 00:02:21.931 Compiler for C supports arguments -Wold-style-definition: YES 00:02:21.931 Compiler for C supports arguments -Wpointer-arith: YES 00:02:21.931 Compiler for C supports arguments -Wsign-compare: YES 00:02:21.931 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:21.931 Compiler for C supports arguments -Wundef: YES 00:02:21.931 Compiler for C supports arguments -Wwrite-strings: YES 00:02:21.931 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:21.931 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:21.931 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:21.931 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:21.931 Program objdump found: YES (/usr/bin/objdump) 00:02:21.931 Compiler for C supports arguments -mavx512f: YES 00:02:21.931 Checking if "AVX512 checking" compiles: YES 00:02:21.931 Fetching 
value of define "__SSE4_2__" : 1 00:02:21.931 Fetching value of define "__AES__" : 1 00:02:21.931 Fetching value of define "__AVX__" : 1 00:02:21.931 Fetching value of define "__AVX2__" : 1 00:02:21.931 Fetching value of define "__AVX512BW__" : 1 00:02:21.931 Fetching value of define "__AVX512CD__" : 1 00:02:21.931 Fetching value of define "__AVX512DQ__" : 1 00:02:21.931 Fetching value of define "__AVX512F__" : 1 00:02:21.931 Fetching value of define "__AVX512VL__" : 1 00:02:21.931 Fetching value of define "__PCLMUL__" : 1 00:02:21.931 Fetching value of define "__RDRND__" : 1 00:02:21.931 Fetching value of define "__RDSEED__" : 1 00:02:21.931 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:21.931 Fetching value of define "__znver1__" : (undefined) 00:02:21.931 Fetching value of define "__znver2__" : (undefined) 00:02:21.931 Fetching value of define "__znver3__" : (undefined) 00:02:21.931 Fetching value of define "__znver4__" : (undefined) 00:02:21.931 Library asan found: YES 00:02:21.931 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:21.931 Message: lib/log: Defining dependency "log" 00:02:21.931 Message: lib/kvargs: Defining dependency "kvargs" 00:02:21.931 Message: lib/telemetry: Defining dependency "telemetry" 00:02:21.931 Library rt found: YES 00:02:21.931 Checking for function "getentropy" : NO 00:02:21.931 Message: lib/eal: Defining dependency "eal" 00:02:21.931 Message: lib/ring: Defining dependency "ring" 00:02:21.931 Message: lib/rcu: Defining dependency "rcu" 00:02:21.931 Message: lib/mempool: Defining dependency "mempool" 00:02:21.931 Message: lib/mbuf: Defining dependency "mbuf" 00:02:21.931 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:21.931 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:21.931 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:21.931 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:21.931 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:21.931 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:21.931 Compiler for C supports arguments -mpclmul: YES 00:02:21.931 Compiler for C supports arguments -maes: YES 00:02:21.931 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:21.931 Compiler for C supports arguments -mavx512bw: YES 00:02:21.931 Compiler for C supports arguments -mavx512dq: YES 00:02:21.931 Compiler for C supports arguments -mavx512vl: YES 00:02:21.931 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:21.931 Compiler for C supports arguments -mavx2: YES 00:02:21.931 Compiler for C supports arguments -mavx: YES 00:02:21.931 Message: lib/net: Defining dependency "net" 00:02:21.931 Message: lib/meter: Defining dependency "meter" 00:02:21.931 Message: lib/ethdev: Defining dependency "ethdev" 00:02:21.931 Message: lib/pci: Defining dependency "pci" 00:02:21.931 Message: lib/cmdline: Defining dependency "cmdline" 00:02:21.931 Message: lib/hash: Defining dependency "hash" 00:02:21.931 Message: lib/timer: Defining dependency "timer" 00:02:21.931 Message: lib/compressdev: Defining dependency "compressdev" 00:02:21.931 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:21.931 Message: lib/dmadev: Defining dependency "dmadev" 00:02:21.931 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:21.931 Message: lib/power: Defining dependency "power" 00:02:21.931 Message: lib/reorder: Defining dependency "reorder" 00:02:21.931 Message: lib/security: Defining dependency "security" 00:02:21.931 Has header "linux/userfaultfd.h" : 
YES 00:02:21.931 Has header "linux/vduse.h" : YES 00:02:21.931 Message: lib/vhost: Defining dependency "vhost" 00:02:21.931 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:21.931 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:21.931 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:21.931 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:21.931 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:21.931 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:21.931 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:21.931 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:21.931 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:21.931 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:21.931 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:21.931 Configuring doxy-api-html.conf using configuration 00:02:21.931 Configuring doxy-api-man.conf using configuration 00:02:21.932 Program mandb found: YES (/usr/bin/mandb) 00:02:21.932 Program sphinx-build found: NO 00:02:21.932 Configuring rte_build_config.h using configuration 00:02:21.932 Message: 00:02:21.932 ================= 00:02:21.932 Applications Enabled 00:02:21.932 ================= 00:02:21.932 00:02:21.932 apps: 00:02:21.932 00:02:21.932 00:02:21.932 Message: 00:02:21.932 ================= 00:02:21.932 Libraries Enabled 00:02:21.932 ================= 00:02:21.932 00:02:21.932 libs: 00:02:21.932 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:21.932 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:21.932 cryptodev, dmadev, power, reorder, security, vhost, 00:02:21.932 00:02:21.932 Message: 00:02:21.932 =============== 00:02:21.932 Drivers Enabled 00:02:21.932 =============== 00:02:21.932 00:02:21.932 common: 00:02:21.932 00:02:21.932 bus: 00:02:21.932 pci, vdev, 00:02:21.932 mempool: 00:02:21.932 ring, 00:02:21.932 dma: 00:02:21.932 00:02:21.932 net: 00:02:21.932 00:02:21.932 crypto: 00:02:21.932 00:02:21.932 compress: 00:02:21.932 00:02:21.932 vdpa: 00:02:21.932 00:02:21.932 00:02:21.932 Message: 00:02:21.932 ================= 00:02:21.932 Content Skipped 00:02:21.932 ================= 00:02:21.932 00:02:21.932 apps: 00:02:21.932 dumpcap: explicitly disabled via build config 00:02:21.932 graph: explicitly disabled via build config 00:02:21.932 pdump: explicitly disabled via build config 00:02:21.932 proc-info: explicitly disabled via build config 00:02:21.932 test-acl: explicitly disabled via build config 00:02:21.932 test-bbdev: explicitly disabled via build config 00:02:21.932 test-cmdline: explicitly disabled via build config 00:02:21.932 test-compress-perf: explicitly disabled via build config 00:02:21.932 test-crypto-perf: explicitly disabled via build config 00:02:21.932 test-dma-perf: explicitly disabled via build config 00:02:21.932 test-eventdev: explicitly disabled via build config 00:02:21.932 test-fib: explicitly disabled via build config 00:02:21.932 test-flow-perf: explicitly disabled via build config 00:02:21.932 test-gpudev: explicitly disabled via build config 00:02:21.932 test-mldev: explicitly disabled via build config 00:02:21.932 test-pipeline: explicitly disabled via build config 00:02:21.932 test-pmd: explicitly disabled via build config 00:02:21.932 test-regex: explicitly disabled via build config 00:02:21.932 test-sad: explicitly disabled 
via build config 00:02:21.932 test-security-perf: explicitly disabled via build config 00:02:21.932 00:02:21.932 libs: 00:02:21.932 argparse: explicitly disabled via build config 00:02:21.932 metrics: explicitly disabled via build config 00:02:21.932 acl: explicitly disabled via build config 00:02:21.932 bbdev: explicitly disabled via build config 00:02:21.932 bitratestats: explicitly disabled via build config 00:02:21.932 bpf: explicitly disabled via build config 00:02:21.932 cfgfile: explicitly disabled via build config 00:02:21.932 distributor: explicitly disabled via build config 00:02:21.932 efd: explicitly disabled via build config 00:02:21.932 eventdev: explicitly disabled via build config 00:02:21.932 dispatcher: explicitly disabled via build config 00:02:21.932 gpudev: explicitly disabled via build config 00:02:21.932 gro: explicitly disabled via build config 00:02:21.932 gso: explicitly disabled via build config 00:02:21.932 ip_frag: explicitly disabled via build config 00:02:21.932 jobstats: explicitly disabled via build config 00:02:21.932 latencystats: explicitly disabled via build config 00:02:21.932 lpm: explicitly disabled via build config 00:02:21.932 member: explicitly disabled via build config 00:02:21.932 pcapng: explicitly disabled via build config 00:02:21.932 rawdev: explicitly disabled via build config 00:02:21.932 regexdev: explicitly disabled via build config 00:02:21.932 mldev: explicitly disabled via build config 00:02:21.932 rib: explicitly disabled via build config 00:02:21.932 sched: explicitly disabled via build config 00:02:21.932 stack: explicitly disabled via build config 00:02:21.932 ipsec: explicitly disabled via build config 00:02:21.932 pdcp: explicitly disabled via build config 00:02:21.932 fib: explicitly disabled via build config 00:02:21.932 port: explicitly disabled via build config 00:02:21.932 pdump: explicitly disabled via build config 00:02:21.932 table: explicitly disabled via build config 00:02:21.932 pipeline: explicitly disabled via build config 00:02:21.932 graph: explicitly disabled via build config 00:02:21.932 node: explicitly disabled via build config 00:02:21.932 00:02:21.932 drivers: 00:02:21.932 common/cpt: not in enabled drivers build config 00:02:21.932 common/dpaax: not in enabled drivers build config 00:02:21.932 common/iavf: not in enabled drivers build config 00:02:21.932 common/idpf: not in enabled drivers build config 00:02:21.932 common/ionic: not in enabled drivers build config 00:02:21.932 common/mvep: not in enabled drivers build config 00:02:21.932 common/octeontx: not in enabled drivers build config 00:02:21.932 bus/auxiliary: not in enabled drivers build config 00:02:21.932 bus/cdx: not in enabled drivers build config 00:02:21.932 bus/dpaa: not in enabled drivers build config 00:02:21.932 bus/fslmc: not in enabled drivers build config 00:02:21.932 bus/ifpga: not in enabled drivers build config 00:02:21.932 bus/platform: not in enabled drivers build config 00:02:21.932 bus/uacce: not in enabled drivers build config 00:02:21.932 bus/vmbus: not in enabled drivers build config 00:02:21.932 common/cnxk: not in enabled drivers build config 00:02:21.932 common/mlx5: not in enabled drivers build config 00:02:21.932 common/nfp: not in enabled drivers build config 00:02:21.932 common/nitrox: not in enabled drivers build config 00:02:21.932 common/qat: not in enabled drivers build config 00:02:21.932 common/sfc_efx: not in enabled drivers build config 00:02:21.932 mempool/bucket: not in enabled drivers build config 
00:02:21.932 mempool/cnxk: not in enabled drivers build config 00:02:21.932 mempool/dpaa: not in enabled drivers build config 00:02:21.932 mempool/dpaa2: not in enabled drivers build config 00:02:21.932 mempool/octeontx: not in enabled drivers build config 00:02:21.932 mempool/stack: not in enabled drivers build config 00:02:21.932 dma/cnxk: not in enabled drivers build config 00:02:21.932 dma/dpaa: not in enabled drivers build config 00:02:21.932 dma/dpaa2: not in enabled drivers build config 00:02:21.932 dma/hisilicon: not in enabled drivers build config 00:02:21.932 dma/idxd: not in enabled drivers build config 00:02:21.932 dma/ioat: not in enabled drivers build config 00:02:21.932 dma/skeleton: not in enabled drivers build config 00:02:21.932 net/af_packet: not in enabled drivers build config 00:02:21.932 net/af_xdp: not in enabled drivers build config 00:02:21.932 net/ark: not in enabled drivers build config 00:02:21.932 net/atlantic: not in enabled drivers build config 00:02:21.932 net/avp: not in enabled drivers build config 00:02:21.932 net/axgbe: not in enabled drivers build config 00:02:21.932 net/bnx2x: not in enabled drivers build config 00:02:21.932 net/bnxt: not in enabled drivers build config 00:02:21.932 net/bonding: not in enabled drivers build config 00:02:21.932 net/cnxk: not in enabled drivers build config 00:02:21.932 net/cpfl: not in enabled drivers build config 00:02:21.932 net/cxgbe: not in enabled drivers build config 00:02:21.932 net/dpaa: not in enabled drivers build config 00:02:21.932 net/dpaa2: not in enabled drivers build config 00:02:21.932 net/e1000: not in enabled drivers build config 00:02:21.932 net/ena: not in enabled drivers build config 00:02:21.932 net/enetc: not in enabled drivers build config 00:02:21.932 net/enetfec: not in enabled drivers build config 00:02:21.932 net/enic: not in enabled drivers build config 00:02:21.932 net/failsafe: not in enabled drivers build config 00:02:21.932 net/fm10k: not in enabled drivers build config 00:02:21.932 net/gve: not in enabled drivers build config 00:02:21.932 net/hinic: not in enabled drivers build config 00:02:21.932 net/hns3: not in enabled drivers build config 00:02:21.932 net/i40e: not in enabled drivers build config 00:02:21.932 net/iavf: not in enabled drivers build config 00:02:21.932 net/ice: not in enabled drivers build config 00:02:21.932 net/idpf: not in enabled drivers build config 00:02:21.933 net/igc: not in enabled drivers build config 00:02:21.933 net/ionic: not in enabled drivers build config 00:02:21.933 net/ipn3ke: not in enabled drivers build config 00:02:21.933 net/ixgbe: not in enabled drivers build config 00:02:21.933 net/mana: not in enabled drivers build config 00:02:21.933 net/memif: not in enabled drivers build config 00:02:21.933 net/mlx4: not in enabled drivers build config 00:02:21.933 net/mlx5: not in enabled drivers build config 00:02:21.933 net/mvneta: not in enabled drivers build config 00:02:21.933 net/mvpp2: not in enabled drivers build config 00:02:21.933 net/netvsc: not in enabled drivers build config 00:02:21.933 net/nfb: not in enabled drivers build config 00:02:21.933 net/nfp: not in enabled drivers build config 00:02:21.933 net/ngbe: not in enabled drivers build config 00:02:21.933 net/null: not in enabled drivers build config 00:02:21.933 net/octeontx: not in enabled drivers build config 00:02:21.933 net/octeon_ep: not in enabled drivers build config 00:02:21.933 net/pcap: not in enabled drivers build config 00:02:21.933 net/pfe: not in enabled drivers build 
config 00:02:21.933 net/qede: not in enabled drivers build config 00:02:21.933 net/ring: not in enabled drivers build config 00:02:21.933 net/sfc: not in enabled drivers build config 00:02:21.933 net/softnic: not in enabled drivers build config 00:02:21.933 net/tap: not in enabled drivers build config 00:02:21.933 net/thunderx: not in enabled drivers build config 00:02:21.933 net/txgbe: not in enabled drivers build config 00:02:21.933 net/vdev_netvsc: not in enabled drivers build config 00:02:21.933 net/vhost: not in enabled drivers build config 00:02:21.933 net/virtio: not in enabled drivers build config 00:02:21.933 net/vmxnet3: not in enabled drivers build config 00:02:21.933 raw/*: missing internal dependency, "rawdev" 00:02:21.933 crypto/armv8: not in enabled drivers build config 00:02:21.933 crypto/bcmfs: not in enabled drivers build config 00:02:21.933 crypto/caam_jr: not in enabled drivers build config 00:02:21.933 crypto/ccp: not in enabled drivers build config 00:02:21.933 crypto/cnxk: not in enabled drivers build config 00:02:21.933 crypto/dpaa_sec: not in enabled drivers build config 00:02:21.933 crypto/dpaa2_sec: not in enabled drivers build config 00:02:21.933 crypto/ipsec_mb: not in enabled drivers build config 00:02:21.933 crypto/mlx5: not in enabled drivers build config 00:02:21.933 crypto/mvsam: not in enabled drivers build config 00:02:21.933 crypto/nitrox: not in enabled drivers build config 00:02:21.933 crypto/null: not in enabled drivers build config 00:02:21.933 crypto/octeontx: not in enabled drivers build config 00:02:21.933 crypto/openssl: not in enabled drivers build config 00:02:21.933 crypto/scheduler: not in enabled drivers build config 00:02:21.933 crypto/uadk: not in enabled drivers build config 00:02:21.933 crypto/virtio: not in enabled drivers build config 00:02:21.933 compress/isal: not in enabled drivers build config 00:02:21.933 compress/mlx5: not in enabled drivers build config 00:02:21.933 compress/nitrox: not in enabled drivers build config 00:02:21.933 compress/octeontx: not in enabled drivers build config 00:02:21.933 compress/zlib: not in enabled drivers build config 00:02:21.933 regex/*: missing internal dependency, "regexdev" 00:02:21.933 ml/*: missing internal dependency, "mldev" 00:02:21.933 vdpa/ifc: not in enabled drivers build config 00:02:21.933 vdpa/mlx5: not in enabled drivers build config 00:02:21.933 vdpa/nfp: not in enabled drivers build config 00:02:21.933 vdpa/sfc: not in enabled drivers build config 00:02:21.933 event/*: missing internal dependency, "eventdev" 00:02:21.933 baseband/*: missing internal dependency, "bbdev" 00:02:21.933 gpu/*: missing internal dependency, "gpudev" 00:02:21.933 00:02:21.933 00:02:21.933 Build targets in project: 85 00:02:21.933 00:02:21.933 DPDK 24.03.0 00:02:21.933 00:02:21.933 User defined options 00:02:21.933 buildtype : debug 00:02:21.933 default_library : shared 00:02:21.933 libdir : lib 00:02:21.933 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:21.933 b_sanitize : address 00:02:21.933 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:21.933 c_link_args : 00:02:21.933 cpu_instruction_set: native 00:02:21.933 disable_apps : test-acl,graph,test-dma-perf,test-gpudev,test-crypto-perf,test,test-security-perf,test-mldev,proc-info,test-pmd,test-pipeline,test-eventdev,test-cmdline,test-fib,pdump,test-flow-perf,test-bbdev,test-regex,test-sad,dumpcap,test-compress-perf 00:02:21.933 disable_libs : 
acl,bitratestats,graph,bbdev,jobstats,ipsec,gso,table,rib,node,mldev,sched,ip_frag,cfgfile,port,pcapng,pdcp,argparse,stack,eventdev,regexdev,distributor,gro,efd,pipeline,bpf,dispatcher,lpm,metrics,latencystats,pdump,gpudev,member,fib,rawdev 00:02:21.933 enable_docs : false 00:02:21.933 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:21.933 enable_kmods : false 00:02:21.933 max_lcores : 128 00:02:21.933 tests : false 00:02:21.933 00:02:21.933 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.933 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:21.933 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:21.933 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:21.933 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:21.933 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:21.933 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:21.933 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:21.933 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:21.933 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:21.933 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:21.933 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:21.933 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:21.933 [12/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:21.933 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:21.933 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:21.933 [15/268] Linking static target lib/librte_kvargs.a 00:02:21.933 [16/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:21.933 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:21.933 [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:21.933 [19/268] Linking static target lib/librte_log.a 00:02:21.933 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:21.933 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:21.933 [22/268] Linking static target lib/librte_pci.a 00:02:21.933 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:21.933 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:21.933 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:21.933 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:21.933 [27/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:21.933 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:21.933 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:21.933 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:21.933 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:21.933 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:21.933 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.933 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 
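The DPDK sub-build compiling above was configured with the options listed in the "User defined options" block; expressed as a meson invocation this corresponds roughly to the following (a sketch; SPDK's build scripts drive this internally and the exact command line may differ):

    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build \
        -Db_sanitize=address -Dmax_lcores=128 -Dtests=false \
        -Denable_kmods=false -Denable_docs=false \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror'
    # plus the disable_apps, disable_libs and enable_drivers lists shown above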
00:02:22.243 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:22.243 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:22.243 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:22.243 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:22.243 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:22.243 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:22.243 [41/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:22.243 [42/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:22.243 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:22.243 [44/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:22.243 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:22.243 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:22.243 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:22.243 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:22.243 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:22.243 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:22.243 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:22.243 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:22.243 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:22.243 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:22.243 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:22.243 [56/268] Linking static target lib/librte_meter.a 00:02:22.243 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:22.243 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:22.243 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:22.243 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:22.243 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:22.243 [62/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:22.243 [63/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:22.243 [64/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:22.243 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:22.243 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:22.243 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:22.243 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:22.243 [69/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:22.243 [70/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:22.243 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:22.243 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:22.243 [73/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:22.243 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:22.243 [75/268] Compiling C 
object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:22.243 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:22.243 [77/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:22.243 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:22.243 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:22.243 [80/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:22.243 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:22.243 [82/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:22.243 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:22.243 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:22.243 [85/268] Linking static target lib/librte_telemetry.a 00:02:22.243 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:22.243 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:22.243 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:22.243 [89/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.243 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:22.243 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:22.243 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:22.243 [93/268] Linking static target lib/librte_ring.a 00:02:22.243 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:22.243 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:22.243 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:22.243 [97/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:22.243 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:22.243 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:22.243 [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:22.243 [101/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:22.243 [102/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.243 [103/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.243 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:22.243 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:22.243 [106/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:22.243 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:22.243 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:22.243 [109/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:22.243 [110/268] Linking static target lib/librte_cmdline.a 00:02:22.243 [111/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:22.243 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:22.243 [113/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:22.243 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 
00:02:22.243 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:22.243 [116/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:22.559 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:22.559 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:22.559 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:22.559 [120/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:22.559 [121/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:22.559 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:22.559 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:22.559 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:22.559 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:22.559 [126/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:22.559 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:22.559 [128/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:22.559 [129/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:22.559 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.559 [131/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:22.559 [132/268] Linking static target lib/librte_timer.a 00:02:22.559 [133/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:22.559 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:22.559 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:22.559 [136/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:22.559 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:22.559 [138/268] Linking static target lib/librte_mempool.a 00:02:22.559 [139/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:22.559 [140/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.559 [141/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:22.559 [142/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:22.559 [143/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:22.559 [144/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:22.559 [145/268] Linking static target lib/librte_rcu.a 00:02:22.559 [146/268] Linking static target lib/librte_dmadev.a 00:02:22.559 [147/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:22.559 [148/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:22.559 [149/268] Linking static target lib/librte_eal.a 00:02:22.559 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:22.559 [151/268] Linking static target lib/librte_net.a 00:02:22.559 [152/268] Linking static target lib/librte_compressdev.a 00:02:22.559 [153/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.559 [154/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:22.559 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:22.559 [156/268] Generating 
lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.559 [157/268] Linking target lib/librte_log.so.24.1 00:02:22.559 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:22.559 [159/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:22.559 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.559 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:22.559 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:22.559 [163/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:22.559 [164/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:22.559 [165/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:22.559 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:22.559 [167/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.559 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.817 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:22.817 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:22.817 [171/268] Linking static target lib/librte_power.a 00:02:22.817 [172/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:22.817 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:22.817 [174/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:22.817 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:22.817 [176/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:22.817 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:22.817 [178/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.817 [179/268] Linking target lib/librte_kvargs.so.24.1 00:02:22.817 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:22.817 [181/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:22.817 [182/268] Linking target lib/librte_telemetry.so.24.1 00:02:22.817 [183/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:22.817 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.817 [185/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:22.817 [186/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:22.817 [187/268] Linking static target lib/librte_reorder.a 00:02:22.817 [188/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:22.817 [189/268] Linking static target lib/librte_mbuf.a 00:02:22.817 [190/268] Linking static target lib/librte_security.a 00:02:22.817 [191/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.817 [192/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.817 [193/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.817 [194/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.818 [195/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:22.818 [196/268] Linking static 
target drivers/librte_bus_vdev.a 00:02:22.818 [197/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:22.818 [198/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:22.818 [199/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:23.075 [200/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:23.075 [201/268] Linking static target lib/librte_hash.a 00:02:23.075 [202/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.075 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:23.075 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:23.075 [205/268] Linking static target drivers/librte_bus_pci.a 00:02:23.075 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:23.075 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:23.075 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.075 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:23.075 [210/268] Linking static target drivers/librte_mempool_ring.a 00:02:23.333 [211/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.333 [212/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.333 [213/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:23.333 [214/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.333 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.333 [216/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:23.333 [217/268] Linking static target lib/librte_cryptodev.a 00:02:23.333 [218/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.590 [219/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.590 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.590 [221/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.847 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.847 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.847 [224/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.104 [225/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:24.104 [226/268] Linking static target lib/librte_ethdev.a 00:02:25.037 [227/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.603 [228/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.129 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.129 [230/268] Linking static target lib/librte_vhost.a 00:02:30.027 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.206 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:35.137 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.137 [234/268] Linking target lib/librte_eal.so.24.1 00:02:35.394 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:35.394 [236/268] Linking target lib/librte_timer.so.24.1 00:02:35.394 [237/268] Linking target lib/librte_meter.so.24.1 00:02:35.394 [238/268] Linking target lib/librte_ring.so.24.1 00:02:35.394 [239/268] Linking target lib/librte_pci.so.24.1 00:02:35.394 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:35.394 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:35.394 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:35.394 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:35.394 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:35.395 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:35.395 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:35.652 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:35.652 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:35.652 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:35.652 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:35.652 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:35.652 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:35.652 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:35.909 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:35.909 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:35.909 [256/268] Linking target lib/librte_net.so.24.1 00:02:35.909 [257/268] Linking target lib/librte_compressdev.so.24.1 00:02:35.909 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:35.909 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:36.167 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:36.167 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:36.167 [262/268] Linking target lib/librte_hash.so.24.1 00:02:36.167 [263/268] Linking target lib/librte_security.so.24.1 00:02:36.167 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:36.167 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:36.167 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:36.425 [267/268] Linking target lib/librte_power.so.24.1 00:02:36.425 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:36.425 INFO: autodetecting backend as ninja 00:02:36.425 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:42.980 CC lib/ut_mock/mock.o 00:02:42.980 CC lib/log/log.o 00:02:42.980 CC lib/log/log_flags.o 00:02:42.980 CC lib/log/log_deprecated.o 00:02:42.980 CC lib/ut/ut.o 00:02:42.980 LIB libspdk_ut_mock.a 00:02:42.980 LIB libspdk_log.a 00:02:42.980 LIB libspdk_ut.a 00:02:42.980 SO libspdk_ut_mock.so.6.0 00:02:42.980 SO libspdk_log.so.7.1 00:02:42.980 SO libspdk_ut.so.2.0 00:02:42.980 SYMLINK libspdk_ut_mock.so 00:02:42.980 SYMLINK libspdk_ut.so 00:02:42.980 SYMLINK libspdk_log.so 
00:02:43.238 CC lib/ioat/ioat.o 00:02:43.238 CC lib/dma/dma.o 00:02:43.238 CXX lib/trace_parser/trace.o 00:02:43.238 CC lib/util/base64.o 00:02:43.238 CC lib/util/bit_array.o 00:02:43.238 CC lib/util/cpuset.o 00:02:43.238 CC lib/util/crc16.o 00:02:43.238 CC lib/util/crc32.o 00:02:43.238 CC lib/util/crc32c.o 00:02:43.238 CC lib/util/crc32_ieee.o 00:02:43.238 CC lib/util/crc64.o 00:02:43.238 CC lib/util/dif.o 00:02:43.238 CC lib/util/fd.o 00:02:43.238 CC lib/util/fd_group.o 00:02:43.238 CC lib/util/file.o 00:02:43.238 CC lib/util/hexlify.o 00:02:43.238 CC lib/util/iov.o 00:02:43.238 CC lib/util/math.o 00:02:43.238 CC lib/util/net.o 00:02:43.238 CC lib/util/pipe.o 00:02:43.238 CC lib/util/strerror_tls.o 00:02:43.238 CC lib/util/uuid.o 00:02:43.238 CC lib/util/string.o 00:02:43.238 CC lib/util/xor.o 00:02:43.238 CC lib/util/zipf.o 00:02:43.238 CC lib/util/md5.o 00:02:43.238 CC lib/vfio_user/host/vfio_user.o 00:02:43.238 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.495 LIB libspdk_dma.a 00:02:43.495 SO libspdk_dma.so.5.0 00:02:43.495 LIB libspdk_ioat.a 00:02:43.495 SYMLINK libspdk_dma.so 00:02:43.495 SO libspdk_ioat.so.7.0 00:02:43.495 SYMLINK libspdk_ioat.so 00:02:43.495 LIB libspdk_vfio_user.a 00:02:43.752 SO libspdk_vfio_user.so.5.0 00:02:43.752 SYMLINK libspdk_vfio_user.so 00:02:43.752 LIB libspdk_util.a 00:02:43.752 SO libspdk_util.so.10.1 00:02:44.010 SYMLINK libspdk_util.so 00:02:44.010 LIB libspdk_trace_parser.a 00:02:44.010 SO libspdk_trace_parser.so.6.0 00:02:44.267 SYMLINK libspdk_trace_parser.so 00:02:44.267 CC lib/json/json_util.o 00:02:44.267 CC lib/json/json_parse.o 00:02:44.267 CC lib/json/json_write.o 00:02:44.267 CC lib/env_dpdk/env.o 00:02:44.267 CC lib/env_dpdk/memory.o 00:02:44.267 CC lib/env_dpdk/pci.o 00:02:44.267 CC lib/env_dpdk/init.o 00:02:44.267 CC lib/env_dpdk/threads.o 00:02:44.267 CC lib/env_dpdk/pci_ioat.o 00:02:44.267 CC lib/env_dpdk/pci_virtio.o 00:02:44.267 CC lib/env_dpdk/pci_vmd.o 00:02:44.267 CC lib/env_dpdk/pci_idxd.o 00:02:44.267 CC lib/env_dpdk/pci_event.o 00:02:44.267 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:44.267 CC lib/env_dpdk/sigbus_handler.o 00:02:44.267 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:44.267 CC lib/env_dpdk/pci_dpdk.o 00:02:44.267 CC lib/idxd/idxd.o 00:02:44.267 CC lib/rdma_utils/rdma_utils.o 00:02:44.267 CC lib/idxd/idxd_user.o 00:02:44.267 CC lib/idxd/idxd_kernel.o 00:02:44.267 CC lib/vmd/led.o 00:02:44.267 CC lib/conf/conf.o 00:02:44.267 CC lib/vmd/vmd.o 00:02:44.526 LIB libspdk_conf.a 00:02:44.526 LIB libspdk_json.a 00:02:44.526 LIB libspdk_rdma_utils.a 00:02:44.526 SO libspdk_conf.so.6.0 00:02:44.526 SO libspdk_json.so.6.0 00:02:44.526 SO libspdk_rdma_utils.so.1.0 00:02:44.783 SYMLINK libspdk_conf.so 00:02:44.783 SYMLINK libspdk_json.so 00:02:44.784 SYMLINK libspdk_rdma_utils.so 00:02:45.041 LIB libspdk_idxd.a 00:02:45.041 LIB libspdk_vmd.a 00:02:45.041 SO libspdk_idxd.so.12.1 00:02:45.041 SO libspdk_vmd.so.6.0 00:02:45.041 CC lib/jsonrpc/jsonrpc_server.o 00:02:45.041 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:45.041 CC lib/jsonrpc/jsonrpc_client.o 00:02:45.041 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:45.041 CC lib/rdma_provider/common.o 00:02:45.041 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:45.041 SYMLINK libspdk_idxd.so 00:02:45.041 SYMLINK libspdk_vmd.so 00:02:45.298 LIB libspdk_rdma_provider.a 00:02:45.298 LIB libspdk_jsonrpc.a 00:02:45.298 SO libspdk_rdma_provider.so.7.0 00:02:45.298 SO libspdk_jsonrpc.so.6.0 00:02:45.298 SYMLINK libspdk_rdma_provider.so 00:02:45.298 SYMLINK libspdk_jsonrpc.so 00:02:45.555 LIB 
libspdk_env_dpdk.a 00:02:45.813 SO libspdk_env_dpdk.so.15.1 00:02:45.813 CC lib/rpc/rpc.o 00:02:45.813 SYMLINK libspdk_env_dpdk.so 00:02:46.070 LIB libspdk_rpc.a 00:02:46.070 SO libspdk_rpc.so.6.0 00:02:46.070 SYMLINK libspdk_rpc.so 00:02:46.328 CC lib/trace/trace.o 00:02:46.328 CC lib/trace/trace_flags.o 00:02:46.328 CC lib/notify/notify.o 00:02:46.328 CC lib/trace/trace_rpc.o 00:02:46.328 CC lib/notify/notify_rpc.o 00:02:46.328 CC lib/keyring/keyring.o 00:02:46.328 CC lib/keyring/keyring_rpc.o 00:02:46.585 LIB libspdk_notify.a 00:02:46.585 SO libspdk_notify.so.6.0 00:02:46.585 LIB libspdk_keyring.a 00:02:46.585 LIB libspdk_trace.a 00:02:46.585 SYMLINK libspdk_notify.so 00:02:46.585 SO libspdk_keyring.so.2.0 00:02:46.585 SO libspdk_trace.so.11.0 00:02:46.843 SYMLINK libspdk_keyring.so 00:02:46.843 SYMLINK libspdk_trace.so 00:02:47.101 CC lib/sock/sock.o 00:02:47.101 CC lib/sock/sock_rpc.o 00:02:47.101 CC lib/thread/thread.o 00:02:47.101 CC lib/thread/iobuf.o 00:02:47.359 LIB libspdk_sock.a 00:02:47.616 SO libspdk_sock.so.10.0 00:02:47.616 SYMLINK libspdk_sock.so 00:02:47.874 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:47.874 CC lib/nvme/nvme_ns.o 00:02:47.874 CC lib/nvme/nvme_ctrlr.o 00:02:47.874 CC lib/nvme/nvme_fabric.o 00:02:47.874 CC lib/nvme/nvme_ns_cmd.o 00:02:47.874 CC lib/nvme/nvme_pcie_common.o 00:02:47.874 CC lib/nvme/nvme_pcie.o 00:02:47.874 CC lib/nvme/nvme_transport.o 00:02:47.874 CC lib/nvme/nvme_qpair.o 00:02:47.874 CC lib/nvme/nvme.o 00:02:47.874 CC lib/nvme/nvme_quirks.o 00:02:47.874 CC lib/nvme/nvme_discovery.o 00:02:47.874 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:47.874 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:47.874 CC lib/nvme/nvme_tcp.o 00:02:47.874 CC lib/nvme/nvme_opal.o 00:02:47.874 CC lib/nvme/nvme_auth.o 00:02:47.874 CC lib/nvme/nvme_io_msg.o 00:02:47.874 CC lib/nvme/nvme_poll_group.o 00:02:47.874 CC lib/nvme/nvme_zns.o 00:02:47.874 CC lib/nvme/nvme_stubs.o 00:02:47.874 CC lib/nvme/nvme_rdma.o 00:02:47.874 CC lib/nvme/nvme_cuse.o 00:02:48.439 LIB libspdk_thread.a 00:02:48.439 SO libspdk_thread.so.11.0 00:02:48.696 SYMLINK libspdk_thread.so 00:02:48.954 CC lib/init/json_config.o 00:02:48.954 CC lib/blob/blobstore.o 00:02:48.954 CC lib/init/subsystem.o 00:02:48.954 CC lib/init/subsystem_rpc.o 00:02:48.954 CC lib/blob/request.o 00:02:48.954 CC lib/init/rpc.o 00:02:48.954 CC lib/blob/zeroes.o 00:02:48.954 CC lib/blob/blob_bs_dev.o 00:02:48.954 CC lib/virtio/virtio.o 00:02:48.954 CC lib/virtio/virtio_vhost_user.o 00:02:48.954 CC lib/accel/accel.o 00:02:48.954 CC lib/virtio/virtio_vfio_user.o 00:02:48.954 CC lib/accel/accel_rpc.o 00:02:48.954 CC lib/virtio/virtio_pci.o 00:02:48.954 CC lib/accel/accel_sw.o 00:02:48.954 CC lib/fsdev/fsdev.o 00:02:48.954 CC lib/fsdev/fsdev_io.o 00:02:48.954 CC lib/fsdev/fsdev_rpc.o 00:02:49.211 LIB libspdk_init.a 00:02:49.211 SO libspdk_init.so.6.0 00:02:49.211 SYMLINK libspdk_init.so 00:02:49.211 LIB libspdk_virtio.a 00:02:49.211 SO libspdk_virtio.so.7.0 00:02:49.468 SYMLINK libspdk_virtio.so 00:02:49.468 LIB libspdk_fsdev.a 00:02:49.468 SO libspdk_fsdev.so.2.0 00:02:49.468 CC lib/event/app.o 00:02:49.468 CC lib/event/reactor.o 00:02:49.468 CC lib/event/scheduler_static.o 00:02:49.468 CC lib/event/log_rpc.o 00:02:49.468 CC lib/event/app_rpc.o 00:02:49.726 SYMLINK libspdk_fsdev.so 00:02:49.983 LIB libspdk_nvme.a 00:02:49.983 LIB libspdk_accel.a 00:02:49.983 SO libspdk_accel.so.16.0 00:02:49.983 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:49.983 LIB libspdk_event.a 00:02:49.983 SO libspdk_nvme.so.15.0 00:02:49.983 SYMLINK libspdk_accel.so 
00:02:49.983 SO libspdk_event.so.14.0 00:02:50.240 SYMLINK libspdk_event.so 00:02:50.240 SYMLINK libspdk_nvme.so 00:02:50.498 CC lib/bdev/bdev.o 00:02:50.498 CC lib/bdev/bdev_rpc.o 00:02:50.498 CC lib/bdev/scsi_nvme.o 00:02:50.498 CC lib/bdev/bdev_zone.o 00:02:50.498 CC lib/bdev/part.o 00:02:50.498 LIB libspdk_fuse_dispatcher.a 00:02:50.498 SO libspdk_fuse_dispatcher.so.1.0 00:02:50.756 SYMLINK libspdk_fuse_dispatcher.so 00:02:51.689 LIB libspdk_blob.a 00:02:51.947 SO libspdk_blob.so.11.0 00:02:51.947 SYMLINK libspdk_blob.so 00:02:52.204 CC lib/blobfs/blobfs.o 00:02:52.204 CC lib/blobfs/tree.o 00:02:52.204 CC lib/lvol/lvol.o 00:02:52.767 LIB libspdk_bdev.a 00:02:52.767 SO libspdk_bdev.so.17.0 00:02:53.025 SYMLINK libspdk_bdev.so 00:02:53.025 LIB libspdk_blobfs.a 00:02:53.025 SO libspdk_blobfs.so.10.0 00:02:53.284 LIB libspdk_lvol.a 00:02:53.284 SYMLINK libspdk_blobfs.so 00:02:53.284 SO libspdk_lvol.so.10.0 00:02:53.284 CC lib/ublk/ublk.o 00:02:53.284 CC lib/ublk/ublk_rpc.o 00:02:53.284 CC lib/nvmf/ctrlr.o 00:02:53.284 CC lib/nvmf/ctrlr_discovery.o 00:02:53.284 CC lib/nvmf/ctrlr_bdev.o 00:02:53.284 CC lib/nbd/nbd.o 00:02:53.284 CC lib/nvmf/subsystem.o 00:02:53.284 CC lib/nvmf/nvmf.o 00:02:53.284 CC lib/nbd/nbd_rpc.o 00:02:53.284 CC lib/nvmf/transport.o 00:02:53.284 CC lib/nvmf/nvmf_rpc.o 00:02:53.284 CC lib/nvmf/mdns_server.o 00:02:53.284 CC lib/nvmf/tcp.o 00:02:53.284 CC lib/ftl/ftl_core.o 00:02:53.284 CC lib/nvmf/stubs.o 00:02:53.284 CC lib/ftl/ftl_init.o 00:02:53.284 CC lib/nvmf/rdma.o 00:02:53.284 CC lib/ftl/ftl_layout.o 00:02:53.284 CC lib/nvmf/auth.o 00:02:53.284 CC lib/ftl/ftl_debug.o 00:02:53.284 CC lib/ftl/ftl_sb.o 00:02:53.284 CC lib/ftl/ftl_io.o 00:02:53.284 CC lib/scsi/dev.o 00:02:53.284 CC lib/scsi/lun.o 00:02:53.284 CC lib/ftl/ftl_l2p.o 00:02:53.284 CC lib/ftl/ftl_l2p_flat.o 00:02:53.284 CC lib/scsi/port.o 00:02:53.284 CC lib/scsi/scsi.o 00:02:53.284 CC lib/ftl/ftl_nv_cache.o 00:02:53.284 CC lib/scsi/scsi_bdev.o 00:02:53.284 CC lib/ftl/ftl_band.o 00:02:53.284 CC lib/ftl/ftl_band_ops.o 00:02:53.284 CC lib/scsi/scsi_pr.o 00:02:53.284 CC lib/scsi/scsi_rpc.o 00:02:53.284 CC lib/ftl/ftl_writer.o 00:02:53.284 CC lib/ftl/ftl_reloc.o 00:02:53.284 CC lib/scsi/task.o 00:02:53.284 CC lib/ftl/ftl_rq.o 00:02:53.284 CC lib/ftl/ftl_l2p_cache.o 00:02:53.284 SYMLINK libspdk_lvol.so 00:02:53.284 CC lib/ftl/ftl_p2l.o 00:02:53.284 CC lib/ftl/ftl_p2l_log.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:53.284 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:53.284 CC lib/ftl/utils/ftl_md.o 00:02:53.284 CC lib/ftl/utils/ftl_conf.o 00:02:53.284 CC lib/ftl/utils/ftl_mempool.o 00:02:53.284 CC lib/ftl/utils/ftl_bitmap.o 00:02:53.284 CC lib/ftl/utils/ftl_property.o 00:02:53.284 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:53.284 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:53.284 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:53.284 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:53.284 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:53.284 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:53.284 CC 
lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:53.284 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:53.284 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:53.284 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:53.284 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:53.284 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:53.284 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:53.284 CC lib/ftl/base/ftl_base_bdev.o 00:02:53.284 CC lib/ftl/base/ftl_base_dev.o 00:02:53.284 CC lib/ftl/ftl_trace.o 00:02:53.850 LIB libspdk_nbd.a 00:02:54.107 SO libspdk_nbd.so.7.0 00:02:54.107 SYMLINK libspdk_nbd.so 00:02:54.107 LIB libspdk_scsi.a 00:02:54.107 SO libspdk_scsi.so.9.0 00:02:54.107 LIB libspdk_ublk.a 00:02:54.107 SO libspdk_ublk.so.3.0 00:02:54.107 SYMLINK libspdk_scsi.so 00:02:54.364 SYMLINK libspdk_ublk.so 00:02:54.621 LIB libspdk_ftl.a 00:02:54.621 CC lib/vhost/vhost.o 00:02:54.621 CC lib/vhost/vhost_rpc.o 00:02:54.621 CC lib/vhost/vhost_scsi.o 00:02:54.621 CC lib/vhost/rte_vhost_user.o 00:02:54.621 CC lib/vhost/vhost_blk.o 00:02:54.621 CC lib/iscsi/conn.o 00:02:54.621 CC lib/iscsi/init_grp.o 00:02:54.621 CC lib/iscsi/iscsi.o 00:02:54.621 CC lib/iscsi/param.o 00:02:54.621 CC lib/iscsi/portal_grp.o 00:02:54.621 CC lib/iscsi/tgt_node.o 00:02:54.621 CC lib/iscsi/iscsi_subsystem.o 00:02:54.621 CC lib/iscsi/iscsi_rpc.o 00:02:54.621 CC lib/iscsi/task.o 00:02:54.621 SO libspdk_ftl.so.9.0 00:02:54.879 SYMLINK libspdk_ftl.so 00:02:55.447 LIB libspdk_vhost.a 00:02:55.447 LIB libspdk_nvmf.a 00:02:55.447 SO libspdk_vhost.so.8.0 00:02:55.705 SO libspdk_nvmf.so.20.0 00:02:55.705 SYMLINK libspdk_vhost.so 00:02:55.705 SYMLINK libspdk_nvmf.so 00:02:55.963 LIB libspdk_iscsi.a 00:02:55.963 SO libspdk_iscsi.so.8.0 00:02:56.222 SYMLINK libspdk_iscsi.so 00:02:56.789 CC module/env_dpdk/env_dpdk_rpc.o 00:02:56.789 CC module/accel/dsa/accel_dsa.o 00:02:56.789 CC module/accel/dsa/accel_dsa_rpc.o 00:02:56.789 LIB libspdk_env_dpdk_rpc.a 00:02:56.789 CC module/keyring/linux/keyring.o 00:02:56.789 CC module/keyring/linux/keyring_rpc.o 00:02:56.789 CC module/keyring/file/keyring_rpc.o 00:02:56.789 CC module/keyring/file/keyring.o 00:02:56.789 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:56.789 CC module/fsdev/aio/fsdev_aio.o 00:02:56.789 CC module/fsdev/aio/linux_aio_mgr.o 00:02:56.789 CC module/blob/bdev/blob_bdev.o 00:02:56.789 CC module/sock/posix/posix.o 00:02:56.789 CC module/accel/ioat/accel_ioat.o 00:02:56.789 CC module/accel/ioat/accel_ioat_rpc.o 00:02:56.789 CC module/accel/error/accel_error_rpc.o 00:02:56.789 CC module/accel/error/accel_error.o 00:02:56.789 CC module/accel/iaa/accel_iaa.o 00:02:56.789 CC module/accel/iaa/accel_iaa_rpc.o 00:02:56.789 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:56.789 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:56.789 SO libspdk_env_dpdk_rpc.so.6.0 00:02:56.789 CC module/scheduler/gscheduler/gscheduler.o 00:02:56.789 SYMLINK libspdk_env_dpdk_rpc.so 00:02:57.046 LIB libspdk_keyring_file.a 00:02:57.046 LIB libspdk_keyring_linux.a 00:02:57.046 LIB libspdk_scheduler_dpdk_governor.a 00:02:57.046 SO libspdk_keyring_file.so.2.0 00:02:57.046 SO libspdk_keyring_linux.so.1.0 00:02:57.046 LIB libspdk_scheduler_gscheduler.a 00:02:57.046 LIB libspdk_accel_ioat.a 00:02:57.046 LIB libspdk_accel_error.a 00:02:57.046 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:57.046 LIB libspdk_accel_iaa.a 00:02:57.046 LIB libspdk_scheduler_dynamic.a 00:02:57.046 SO libspdk_scheduler_gscheduler.so.4.0 00:02:57.046 SO libspdk_accel_ioat.so.6.0 00:02:57.046 SO libspdk_accel_iaa.so.3.0 00:02:57.046 LIB libspdk_accel_dsa.a 00:02:57.046 SO 
libspdk_accel_error.so.2.0 00:02:57.046 SYMLINK libspdk_keyring_file.so 00:02:57.046 SYMLINK libspdk_keyring_linux.so 00:02:57.047 LIB libspdk_blob_bdev.a 00:02:57.047 SO libspdk_scheduler_dynamic.so.4.0 00:02:57.047 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:57.047 SYMLINK libspdk_scheduler_gscheduler.so 00:02:57.047 SO libspdk_accel_dsa.so.5.0 00:02:57.047 SO libspdk_blob_bdev.so.11.0 00:02:57.047 SYMLINK libspdk_accel_ioat.so 00:02:57.047 SYMLINK libspdk_accel_iaa.so 00:02:57.047 SYMLINK libspdk_accel_error.so 00:02:57.047 SYMLINK libspdk_scheduler_dynamic.so 00:02:57.305 SYMLINK libspdk_blob_bdev.so 00:02:57.305 SYMLINK libspdk_accel_dsa.so 00:02:57.562 LIB libspdk_fsdev_aio.a 00:02:57.562 SO libspdk_fsdev_aio.so.1.0 00:02:57.562 LIB libspdk_sock_posix.a 00:02:57.562 SO libspdk_sock_posix.so.6.0 00:02:57.562 SYMLINK libspdk_fsdev_aio.so 00:02:57.562 SYMLINK libspdk_sock_posix.so 00:02:57.562 CC module/bdev/malloc/bdev_malloc.o 00:02:57.562 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:57.821 CC module/blobfs/bdev/blobfs_bdev.o 00:02:57.821 CC module/bdev/null/bdev_null.o 00:02:57.821 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:57.821 CC module/bdev/null/bdev_null_rpc.o 00:02:57.821 CC module/bdev/lvol/vbdev_lvol.o 00:02:57.821 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:57.821 CC module/bdev/delay/vbdev_delay.o 00:02:57.821 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:57.821 CC module/bdev/split/vbdev_split.o 00:02:57.821 CC module/bdev/split/vbdev_split_rpc.o 00:02:57.821 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:57.821 CC module/bdev/gpt/gpt.o 00:02:57.821 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:57.821 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:57.821 CC module/bdev/gpt/vbdev_gpt.o 00:02:57.821 CC module/bdev/iscsi/bdev_iscsi.o 00:02:57.821 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:57.821 CC module/bdev/aio/bdev_aio.o 00:02:57.821 CC module/bdev/error/vbdev_error.o 00:02:57.821 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:57.821 CC module/bdev/ftl/bdev_ftl.o 00:02:57.821 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:57.821 CC module/bdev/aio/bdev_aio_rpc.o 00:02:57.821 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:57.821 CC module/bdev/error/vbdev_error_rpc.o 00:02:57.821 CC module/bdev/raid/bdev_raid.o 00:02:57.821 CC module/bdev/raid/bdev_raid_rpc.o 00:02:57.821 CC module/bdev/passthru/vbdev_passthru.o 00:02:57.821 CC module/bdev/raid/bdev_raid_sb.o 00:02:57.821 CC module/bdev/nvme/bdev_nvme.o 00:02:57.821 CC module/bdev/raid/raid0.o 00:02:57.821 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:57.821 CC module/bdev/raid/raid1.o 00:02:57.821 CC module/bdev/raid/concat.o 00:02:57.821 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:57.821 CC module/bdev/nvme/nvme_rpc.o 00:02:57.821 CC module/bdev/nvme/vbdev_opal.o 00:02:57.821 CC module/bdev/nvme/bdev_mdns_client.o 00:02:57.821 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:57.821 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:57.821 LIB libspdk_blobfs_bdev.a 00:02:58.113 SO libspdk_blobfs_bdev.so.6.0 00:02:58.113 LIB libspdk_bdev_split.a 00:02:58.113 SYMLINK libspdk_blobfs_bdev.so 00:02:58.113 LIB libspdk_bdev_null.a 00:02:58.113 SO libspdk_bdev_split.so.6.0 00:02:58.113 SO libspdk_bdev_null.so.6.0 00:02:58.113 LIB libspdk_bdev_gpt.a 00:02:58.113 LIB libspdk_bdev_error.a 00:02:58.113 LIB libspdk_bdev_passthru.a 00:02:58.113 LIB libspdk_bdev_ftl.a 00:02:58.113 SYMLINK libspdk_bdev_split.so 00:02:58.113 SO libspdk_bdev_gpt.so.6.0 00:02:58.113 SO libspdk_bdev_error.so.6.0 00:02:58.113 LIB libspdk_bdev_aio.a 
00:02:58.113 SO libspdk_bdev_passthru.so.6.0 00:02:58.113 SYMLINK libspdk_bdev_null.so 00:02:58.113 LIB libspdk_bdev_zone_block.a 00:02:58.113 SO libspdk_bdev_ftl.so.6.0 00:02:58.113 LIB libspdk_bdev_malloc.a 00:02:58.113 LIB libspdk_bdev_iscsi.a 00:02:58.113 LIB libspdk_bdev_delay.a 00:02:58.113 SO libspdk_bdev_aio.so.6.0 00:02:58.113 SYMLINK libspdk_bdev_gpt.so 00:02:58.113 SO libspdk_bdev_malloc.so.6.0 00:02:58.113 SO libspdk_bdev_delay.so.6.0 00:02:58.113 SO libspdk_bdev_zone_block.so.6.0 00:02:58.113 SYMLINK libspdk_bdev_error.so 00:02:58.113 SO libspdk_bdev_iscsi.so.6.0 00:02:58.113 SYMLINK libspdk_bdev_passthru.so 00:02:58.437 SYMLINK libspdk_bdev_ftl.so 00:02:58.437 SYMLINK libspdk_bdev_aio.so 00:02:58.437 SYMLINK libspdk_bdev_delay.so 00:02:58.437 SYMLINK libspdk_bdev_iscsi.so 00:02:58.437 SYMLINK libspdk_bdev_zone_block.so 00:02:58.437 LIB libspdk_bdev_lvol.a 00:02:58.437 SYMLINK libspdk_bdev_malloc.so 00:02:58.437 SO libspdk_bdev_lvol.so.6.0 00:02:58.437 LIB libspdk_bdev_virtio.a 00:02:58.437 SO libspdk_bdev_virtio.so.6.0 00:02:58.437 SYMLINK libspdk_bdev_lvol.so 00:02:58.437 SYMLINK libspdk_bdev_virtio.so 00:02:58.696 LIB libspdk_bdev_raid.a 00:02:58.696 SO libspdk_bdev_raid.so.6.0 00:02:58.954 SYMLINK libspdk_bdev_raid.so 00:03:00.332 LIB libspdk_bdev_nvme.a 00:03:00.332 SO libspdk_bdev_nvme.so.7.1 00:03:00.332 SYMLINK libspdk_bdev_nvme.so 00:03:00.900 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:00.900 CC module/event/subsystems/iobuf/iobuf.o 00:03:00.900 CC module/event/subsystems/vmd/vmd.o 00:03:00.900 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:00.900 CC module/event/subsystems/scheduler/scheduler.o 00:03:00.900 CC module/event/subsystems/fsdev/fsdev.o 00:03:00.900 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:00.900 CC module/event/subsystems/sock/sock.o 00:03:00.900 CC module/event/subsystems/keyring/keyring.o 00:03:01.160 LIB libspdk_event_vmd.a 00:03:01.160 LIB libspdk_event_fsdev.a 00:03:01.160 LIB libspdk_event_vhost_blk.a 00:03:01.160 LIB libspdk_event_scheduler.a 00:03:01.160 LIB libspdk_event_keyring.a 00:03:01.160 LIB libspdk_event_iobuf.a 00:03:01.160 LIB libspdk_event_sock.a 00:03:01.160 SO libspdk_event_vmd.so.6.0 00:03:01.160 SO libspdk_event_vhost_blk.so.3.0 00:03:01.160 SO libspdk_event_fsdev.so.1.0 00:03:01.160 SO libspdk_event_scheduler.so.4.0 00:03:01.160 SO libspdk_event_keyring.so.1.0 00:03:01.160 SO libspdk_event_iobuf.so.3.0 00:03:01.160 SO libspdk_event_sock.so.5.0 00:03:01.160 SYMLINK libspdk_event_vmd.so 00:03:01.160 SYMLINK libspdk_event_sock.so 00:03:01.160 SYMLINK libspdk_event_vhost_blk.so 00:03:01.160 SYMLINK libspdk_event_scheduler.so 00:03:01.160 SYMLINK libspdk_event_fsdev.so 00:03:01.160 SYMLINK libspdk_event_keyring.so 00:03:01.160 SYMLINK libspdk_event_iobuf.so 00:03:01.727 CC module/event/subsystems/accel/accel.o 00:03:01.728 LIB libspdk_event_accel.a 00:03:01.728 SO libspdk_event_accel.so.6.0 00:03:01.728 SYMLINK libspdk_event_accel.so 00:03:02.296 CC module/event/subsystems/bdev/bdev.o 00:03:02.296 LIB libspdk_event_bdev.a 00:03:02.296 SO libspdk_event_bdev.so.6.0 00:03:02.555 SYMLINK libspdk_event_bdev.so 00:03:02.814 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:02.814 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:02.814 CC module/event/subsystems/nbd/nbd.o 00:03:02.814 CC module/event/subsystems/scsi/scsi.o 00:03:02.814 CC module/event/subsystems/ublk/ublk.o 00:03:03.072 LIB libspdk_event_nbd.a 00:03:03.072 LIB libspdk_event_ublk.a 00:03:03.072 LIB libspdk_event_scsi.a 00:03:03.072 SO 
libspdk_event_nbd.so.6.0 00:03:03.072 LIB libspdk_event_nvmf.a 00:03:03.072 SO libspdk_event_ublk.so.3.0 00:03:03.072 SO libspdk_event_scsi.so.6.0 00:03:03.072 SO libspdk_event_nvmf.so.6.0 00:03:03.072 SYMLINK libspdk_event_nbd.so 00:03:03.072 SYMLINK libspdk_event_ublk.so 00:03:03.072 SYMLINK libspdk_event_scsi.so 00:03:03.072 SYMLINK libspdk_event_nvmf.so 00:03:03.640 CC module/event/subsystems/iscsi/iscsi.o 00:03:03.640 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:03.640 LIB libspdk_event_vhost_scsi.a 00:03:03.640 LIB libspdk_event_iscsi.a 00:03:03.640 SO libspdk_event_vhost_scsi.so.3.0 00:03:03.640 SO libspdk_event_iscsi.so.6.0 00:03:03.640 SYMLINK libspdk_event_vhost_scsi.so 00:03:03.640 SYMLINK libspdk_event_iscsi.so 00:03:03.897 SO libspdk.so.6.0 00:03:03.897 SYMLINK libspdk.so 00:03:04.155 CC app/spdk_lspci/spdk_lspci.o 00:03:04.426 CC app/spdk_nvme_perf/perf.o 00:03:04.426 CXX app/trace/trace.o 00:03:04.426 CC app/spdk_top/spdk_top.o 00:03:04.426 CC app/spdk_nvme_discover/discovery_aer.o 00:03:04.426 CC app/spdk_nvme_identify/identify.o 00:03:04.426 CC app/trace_record/trace_record.o 00:03:04.426 CC test/rpc_client/rpc_client_test.o 00:03:04.426 TEST_HEADER include/spdk/accel.h 00:03:04.426 TEST_HEADER include/spdk/assert.h 00:03:04.426 TEST_HEADER include/spdk/accel_module.h 00:03:04.426 TEST_HEADER include/spdk/barrier.h 00:03:04.426 TEST_HEADER include/spdk/base64.h 00:03:04.426 TEST_HEADER include/spdk/bdev_module.h 00:03:04.426 TEST_HEADER include/spdk/bdev.h 00:03:04.426 TEST_HEADER include/spdk/bit_array.h 00:03:04.426 TEST_HEADER include/spdk/bdev_zone.h 00:03:04.426 TEST_HEADER include/spdk/blob_bdev.h 00:03:04.426 TEST_HEADER include/spdk/bit_pool.h 00:03:04.426 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:04.426 TEST_HEADER include/spdk/conf.h 00:03:04.426 TEST_HEADER include/spdk/blobfs.h 00:03:04.426 TEST_HEADER include/spdk/blob.h 00:03:04.427 CC app/nvmf_tgt/nvmf_main.o 00:03:04.427 TEST_HEADER include/spdk/cpuset.h 00:03:04.427 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:04.427 TEST_HEADER include/spdk/crc16.h 00:03:04.427 TEST_HEADER include/spdk/config.h 00:03:04.427 TEST_HEADER include/spdk/crc32.h 00:03:04.427 TEST_HEADER include/spdk/crc64.h 00:03:04.427 TEST_HEADER include/spdk/dif.h 00:03:04.427 TEST_HEADER include/spdk/dma.h 00:03:04.427 TEST_HEADER include/spdk/endian.h 00:03:04.427 TEST_HEADER include/spdk/env_dpdk.h 00:03:04.427 TEST_HEADER include/spdk/env.h 00:03:04.427 TEST_HEADER include/spdk/event.h 00:03:04.427 TEST_HEADER include/spdk/fd.h 00:03:04.427 TEST_HEADER include/spdk/fd_group.h 00:03:04.427 TEST_HEADER include/spdk/file.h 00:03:04.427 TEST_HEADER include/spdk/fsdev.h 00:03:04.427 TEST_HEADER include/spdk/fsdev_module.h 00:03:04.427 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:04.427 TEST_HEADER include/spdk/ftl.h 00:03:04.427 TEST_HEADER include/spdk/gpt_spec.h 00:03:04.427 TEST_HEADER include/spdk/hexlify.h 00:03:04.427 TEST_HEADER include/spdk/histogram_data.h 00:03:04.427 TEST_HEADER include/spdk/idxd_spec.h 00:03:04.427 TEST_HEADER include/spdk/ioat.h 00:03:04.427 TEST_HEADER include/spdk/idxd.h 00:03:04.427 TEST_HEADER include/spdk/ioat_spec.h 00:03:04.427 TEST_HEADER include/spdk/init.h 00:03:04.427 TEST_HEADER include/spdk/json.h 00:03:04.427 TEST_HEADER include/spdk/iscsi_spec.h 00:03:04.427 TEST_HEADER include/spdk/jsonrpc.h 00:03:04.427 TEST_HEADER include/spdk/keyring_module.h 00:03:04.427 TEST_HEADER include/spdk/keyring.h 00:03:04.427 CC app/spdk_dd/spdk_dd.o 00:03:04.427 TEST_HEADER include/spdk/likely.h 
00:03:04.427 TEST_HEADER include/spdk/lvol.h 00:03:04.427 TEST_HEADER include/spdk/log.h 00:03:04.427 TEST_HEADER include/spdk/md5.h 00:03:04.427 CC app/iscsi_tgt/iscsi_tgt.o 00:03:04.427 TEST_HEADER include/spdk/mmio.h 00:03:04.427 TEST_HEADER include/spdk/memory.h 00:03:04.427 TEST_HEADER include/spdk/net.h 00:03:04.427 TEST_HEADER include/spdk/nbd.h 00:03:04.427 TEST_HEADER include/spdk/notify.h 00:03:04.427 TEST_HEADER include/spdk/nvme_intel.h 00:03:04.427 TEST_HEADER include/spdk/nvme.h 00:03:04.427 TEST_HEADER include/spdk/nvme_spec.h 00:03:04.427 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:04.427 TEST_HEADER include/spdk/nvme_zns.h 00:03:04.427 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:04.427 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:04.427 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:04.427 TEST_HEADER include/spdk/nvmf.h 00:03:04.427 TEST_HEADER include/spdk/nvmf_transport.h 00:03:04.427 TEST_HEADER include/spdk/nvmf_spec.h 00:03:04.427 TEST_HEADER include/spdk/opal.h 00:03:04.427 TEST_HEADER include/spdk/opal_spec.h 00:03:04.427 TEST_HEADER include/spdk/pci_ids.h 00:03:04.427 TEST_HEADER include/spdk/queue.h 00:03:04.427 TEST_HEADER include/spdk/pipe.h 00:03:04.427 TEST_HEADER include/spdk/rpc.h 00:03:04.427 CC app/spdk_tgt/spdk_tgt.o 00:03:04.427 TEST_HEADER include/spdk/reduce.h 00:03:04.427 TEST_HEADER include/spdk/scheduler.h 00:03:04.427 TEST_HEADER include/spdk/scsi.h 00:03:04.427 TEST_HEADER include/spdk/scsi_spec.h 00:03:04.427 TEST_HEADER include/spdk/stdinc.h 00:03:04.427 TEST_HEADER include/spdk/sock.h 00:03:04.427 TEST_HEADER include/spdk/string.h 00:03:04.427 TEST_HEADER include/spdk/thread.h 00:03:04.427 TEST_HEADER include/spdk/trace.h 00:03:04.427 TEST_HEADER include/spdk/trace_parser.h 00:03:04.427 TEST_HEADER include/spdk/tree.h 00:03:04.427 TEST_HEADER include/spdk/ublk.h 00:03:04.427 TEST_HEADER include/spdk/util.h 00:03:04.427 TEST_HEADER include/spdk/uuid.h 00:03:04.427 TEST_HEADER include/spdk/version.h 00:03:04.427 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:04.427 TEST_HEADER include/spdk/vhost.h 00:03:04.427 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:04.427 TEST_HEADER include/spdk/vmd.h 00:03:04.427 TEST_HEADER include/spdk/xor.h 00:03:04.427 TEST_HEADER include/spdk/zipf.h 00:03:04.427 CXX test/cpp_headers/accel_module.o 00:03:04.427 CXX test/cpp_headers/assert.o 00:03:04.427 CXX test/cpp_headers/accel.o 00:03:04.427 CXX test/cpp_headers/barrier.o 00:03:04.427 CXX test/cpp_headers/bdev_zone.o 00:03:04.427 CXX test/cpp_headers/bdev.o 00:03:04.427 CXX test/cpp_headers/base64.o 00:03:04.427 CXX test/cpp_headers/bdev_module.o 00:03:04.427 CXX test/cpp_headers/bit_array.o 00:03:04.427 CXX test/cpp_headers/blobfs.o 00:03:04.427 CXX test/cpp_headers/blob_bdev.o 00:03:04.427 CXX test/cpp_headers/bit_pool.o 00:03:04.427 CXX test/cpp_headers/blobfs_bdev.o 00:03:04.427 CXX test/cpp_headers/blob.o 00:03:04.427 CXX test/cpp_headers/config.o 00:03:04.427 CXX test/cpp_headers/conf.o 00:03:04.427 CXX test/cpp_headers/cpuset.o 00:03:04.427 CXX test/cpp_headers/crc16.o 00:03:04.427 CXX test/cpp_headers/dif.o 00:03:04.427 CXX test/cpp_headers/crc64.o 00:03:04.427 CXX test/cpp_headers/dma.o 00:03:04.427 CXX test/cpp_headers/crc32.o 00:03:04.427 CXX test/cpp_headers/endian.o 00:03:04.427 CXX test/cpp_headers/env_dpdk.o 00:03:04.427 CXX test/cpp_headers/event.o 00:03:04.427 CXX test/cpp_headers/env.o 00:03:04.427 CXX test/cpp_headers/fd_group.o 00:03:04.427 CXX test/cpp_headers/fd.o 00:03:04.427 CXX test/cpp_headers/fsdev.o 00:03:04.427 CXX 
test/cpp_headers/fsdev_module.o 00:03:04.427 CXX test/cpp_headers/file.o 00:03:04.427 CXX test/cpp_headers/ftl.o 00:03:04.427 CXX test/cpp_headers/fuse_dispatcher.o 00:03:04.427 CXX test/cpp_headers/gpt_spec.o 00:03:04.427 CXX test/cpp_headers/hexlify.o 00:03:04.427 CXX test/cpp_headers/histogram_data.o 00:03:04.427 CXX test/cpp_headers/idxd.o 00:03:04.427 CXX test/cpp_headers/idxd_spec.o 00:03:04.427 CXX test/cpp_headers/init.o 00:03:04.427 CXX test/cpp_headers/ioat.o 00:03:04.427 CXX test/cpp_headers/ioat_spec.o 00:03:04.427 CXX test/cpp_headers/json.o 00:03:04.427 CXX test/cpp_headers/jsonrpc.o 00:03:04.427 CXX test/cpp_headers/iscsi_spec.o 00:03:04.427 CXX test/cpp_headers/keyring.o 00:03:04.427 CXX test/cpp_headers/keyring_module.o 00:03:04.427 CXX test/cpp_headers/likely.o 00:03:04.427 CXX test/cpp_headers/log.o 00:03:04.427 CXX test/cpp_headers/lvol.o 00:03:04.427 CXX test/cpp_headers/memory.o 00:03:04.427 CXX test/cpp_headers/md5.o 00:03:04.427 CXX test/cpp_headers/mmio.o 00:03:04.427 CXX test/cpp_headers/nbd.o 00:03:04.427 CXX test/cpp_headers/net.o 00:03:04.427 CXX test/cpp_headers/nvme.o 00:03:04.427 CXX test/cpp_headers/notify.o 00:03:04.427 CXX test/cpp_headers/nvme_intel.o 00:03:04.427 CXX test/cpp_headers/nvme_ocssd.o 00:03:04.427 CXX test/cpp_headers/nvme_spec.o 00:03:04.427 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:04.427 CXX test/cpp_headers/nvmf_cmd.o 00:03:04.427 CXX test/cpp_headers/nvme_zns.o 00:03:04.427 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:04.427 CXX test/cpp_headers/nvmf.o 00:03:04.427 CXX test/cpp_headers/nvmf_spec.o 00:03:04.427 CXX test/cpp_headers/nvmf_transport.o 00:03:04.427 CXX test/cpp_headers/opal.o 00:03:04.427 CXX test/cpp_headers/opal_spec.o 00:03:04.427 CXX test/cpp_headers/pci_ids.o 00:03:04.427 CXX test/cpp_headers/pipe.o 00:03:04.427 CXX test/cpp_headers/reduce.o 00:03:04.427 CXX test/cpp_headers/queue.o 00:03:04.427 CXX test/cpp_headers/rpc.o 00:03:04.427 CXX test/cpp_headers/scheduler.o 00:03:04.427 CXX test/cpp_headers/scsi.o 00:03:04.427 CXX test/cpp_headers/scsi_spec.o 00:03:04.427 CXX test/cpp_headers/sock.o 00:03:04.427 CC test/app/histogram_perf/histogram_perf.o 00:03:04.427 CXX test/cpp_headers/stdinc.o 00:03:04.427 CXX test/cpp_headers/string.o 00:03:04.427 CXX test/cpp_headers/thread.o 00:03:04.427 CXX test/cpp_headers/trace.o 00:03:04.427 CC test/app/stub/stub.o 00:03:04.427 CC examples/util/zipf/zipf.o 00:03:04.427 CXX test/cpp_headers/trace_parser.o 00:03:04.427 CXX test/cpp_headers/tree.o 00:03:04.427 CC test/thread/poller_perf/poller_perf.o 00:03:04.427 CC test/env/vtophys/vtophys.o 00:03:04.427 CC app/fio/nvme/fio_plugin.o 00:03:04.427 CC test/app/jsoncat/jsoncat.o 00:03:04.427 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:04.702 CC test/env/pci/pci_ut.o 00:03:04.702 CC test/env/memory/memory_ut.o 00:03:04.702 CXX test/cpp_headers/ublk.o 00:03:04.702 CC app/fio/bdev/fio_plugin.o 00:03:04.702 CC test/dma/test_dma/test_dma.o 00:03:04.702 CC examples/ioat/perf/perf.o 00:03:04.702 CC test/app/bdev_svc/bdev_svc.o 00:03:04.703 CC examples/ioat/verify/verify.o 00:03:04.703 LINK spdk_lspci 00:03:04.703 CXX test/cpp_headers/util.o 00:03:04.982 LINK interrupt_tgt 00:03:04.982 LINK nvmf_tgt 00:03:04.982 LINK rpc_client_test 00:03:05.243 LINK spdk_nvme_discover 00:03:05.243 LINK poller_perf 00:03:05.243 LINK spdk_tgt 00:03:05.243 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:05.243 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:05.243 LINK zipf 00:03:05.243 LINK jsoncat 00:03:05.243 CC test/env/mem_callbacks/mem_callbacks.o 
00:03:05.243 LINK env_dpdk_post_init 00:03:05.243 LINK stub 00:03:05.243 CXX test/cpp_headers/uuid.o 00:03:05.243 CXX test/cpp_headers/vfio_user_pci.o 00:03:05.243 CXX test/cpp_headers/vfio_user_spec.o 00:03:05.243 CXX test/cpp_headers/version.o 00:03:05.243 CXX test/cpp_headers/vhost.o 00:03:05.243 LINK iscsi_tgt 00:03:05.243 CXX test/cpp_headers/vmd.o 00:03:05.243 CXX test/cpp_headers/xor.o 00:03:05.243 CXX test/cpp_headers/zipf.o 00:03:05.243 LINK histogram_perf 00:03:05.243 LINK vtophys 00:03:05.243 LINK spdk_trace_record 00:03:05.501 LINK bdev_svc 00:03:05.501 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:05.501 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:05.501 LINK ioat_perf 00:03:05.501 LINK verify 00:03:05.501 LINK spdk_dd 00:03:05.501 LINK spdk_trace 00:03:05.501 LINK pci_ut 00:03:05.760 CC examples/vmd/led/led.o 00:03:05.760 LINK test_dma 00:03:05.760 LINK spdk_nvme 00:03:05.760 CC examples/vmd/lsvmd/lsvmd.o 00:03:05.760 LINK spdk_bdev 00:03:05.760 CC examples/idxd/perf/perf.o 00:03:05.760 CC examples/sock/hello_world/hello_sock.o 00:03:05.760 LINK nvme_fuzz 00:03:05.760 CC examples/thread/thread/thread_ex.o 00:03:05.760 CC test/event/event_perf/event_perf.o 00:03:05.760 LINK vhost_fuzz 00:03:05.760 CC test/event/reactor_perf/reactor_perf.o 00:03:05.760 LINK mem_callbacks 00:03:05.760 CC test/event/reactor/reactor.o 00:03:05.760 CC test/event/app_repeat/app_repeat.o 00:03:05.760 CC app/vhost/vhost.o 00:03:05.760 CC test/event/scheduler/scheduler.o 00:03:05.760 LINK spdk_nvme_identify 00:03:05.760 LINK led 00:03:06.018 LINK lsvmd 00:03:06.018 LINK event_perf 00:03:06.018 LINK spdk_nvme_perf 00:03:06.018 LINK reactor 00:03:06.019 LINK reactor_perf 00:03:06.019 LINK spdk_top 00:03:06.019 LINK vhost 00:03:06.019 LINK app_repeat 00:03:06.019 LINK hello_sock 00:03:06.019 LINK thread 00:03:06.019 LINK scheduler 00:03:06.019 LINK idxd_perf 00:03:06.277 CC test/nvme/e2edp/nvme_dp.o 00:03:06.278 CC test/nvme/simple_copy/simple_copy.o 00:03:06.278 CC test/nvme/connect_stress/connect_stress.o 00:03:06.278 CC test/nvme/aer/aer.o 00:03:06.278 CC test/nvme/err_injection/err_injection.o 00:03:06.278 CC test/nvme/cuse/cuse.o 00:03:06.278 CC test/nvme/startup/startup.o 00:03:06.278 CC test/nvme/boot_partition/boot_partition.o 00:03:06.278 CC test/nvme/overhead/overhead.o 00:03:06.278 CC test/nvme/fdp/fdp.o 00:03:06.278 CC test/nvme/reset/reset.o 00:03:06.278 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.278 CC test/nvme/reserve/reserve.o 00:03:06.278 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.278 CC test/nvme/compliance/nvme_compliance.o 00:03:06.278 CC test/nvme/sgl/sgl.o 00:03:06.278 CC test/blobfs/mkfs/mkfs.o 00:03:06.278 LINK memory_ut 00:03:06.278 CC test/accel/dif/dif.o 00:03:06.278 CC test/lvol/esnap/esnap.o 00:03:06.537 LINK startup 00:03:06.537 LINK connect_stress 00:03:06.537 LINK boot_partition 00:03:06.537 LINK err_injection 00:03:06.537 LINK doorbell_aers 00:03:06.537 LINK fused_ordering 00:03:06.537 CC examples/nvme/arbitration/arbitration.o 00:03:06.537 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:06.537 LINK mkfs 00:03:06.537 CC examples/nvme/hotplug/hotplug.o 00:03:06.537 CC examples/nvme/reconnect/reconnect.o 00:03:06.537 CC examples/nvme/hello_world/hello_world.o 00:03:06.537 CC examples/nvme/abort/abort.o 00:03:06.537 LINK reserve 00:03:06.537 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.537 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:06.537 LINK simple_copy 00:03:06.537 LINK nvme_dp 00:03:06.537 LINK reset 00:03:06.537 LINK sgl 
00:03:06.537 LINK overhead 00:03:06.537 CC examples/accel/perf/accel_perf.o 00:03:06.537 LINK aer 00:03:06.537 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:06.537 CC examples/blob/cli/blobcli.o 00:03:06.537 LINK fdp 00:03:06.537 LINK nvme_compliance 00:03:06.537 CC examples/blob/hello_world/hello_blob.o 00:03:06.796 LINK cmb_copy 00:03:06.796 LINK pmr_persistence 00:03:06.796 LINK hello_world 00:03:06.796 LINK hotplug 00:03:06.796 LINK arbitration 00:03:06.796 LINK hello_blob 00:03:06.796 LINK reconnect 00:03:06.796 LINK hello_fsdev 00:03:06.796 LINK abort 00:03:07.056 LINK nvme_manage 00:03:07.056 LINK dif 00:03:07.056 LINK iscsi_fuzz 00:03:07.056 LINK accel_perf 00:03:07.056 LINK blobcli 00:03:07.315 LINK cuse 00:03:07.574 CC examples/bdev/hello_world/hello_bdev.o 00:03:07.574 CC examples/bdev/bdevperf/bdevperf.o 00:03:07.574 CC test/bdev/bdevio/bdevio.o 00:03:07.833 LINK hello_bdev 00:03:08.093 LINK bdevio 00:03:08.353 LINK bdevperf 00:03:08.921 CC examples/nvmf/nvmf/nvmf.o 00:03:09.181 LINK nvmf 00:03:11.088 LINK esnap 00:03:11.349 00:03:11.349 real 0m59.093s 00:03:11.349 user 8m17.288s 00:03:11.349 sys 4m15.231s 00:03:11.349 15:39:56 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:11.349 15:39:56 make -- common/autotest_common.sh@10 -- $ set +x 00:03:11.349 ************************************ 00:03:11.349 END TEST make 00:03:11.349 ************************************ 00:03:11.349 15:39:56 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:11.349 15:39:56 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:11.349 15:39:56 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:11.349 15:39:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.349 15:39:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:11.349 15:39:56 -- pm/common@44 -- $ pid=3032500 00:03:11.349 15:39:56 -- pm/common@50 -- $ kill -TERM 3032500 00:03:11.349 15:39:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.349 15:39:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:11.349 15:39:56 -- pm/common@44 -- $ pid=3032502 00:03:11.349 15:39:56 -- pm/common@50 -- $ kill -TERM 3032502 00:03:11.349 15:39:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.349 15:39:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:11.349 15:39:56 -- pm/common@44 -- $ pid=3032504 00:03:11.349 15:39:56 -- pm/common@50 -- $ kill -TERM 3032504 00:03:11.349 15:39:56 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.349 15:39:56 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:11.349 15:39:56 -- pm/common@44 -- $ pid=3032526 00:03:11.349 15:39:56 -- pm/common@50 -- $ sudo -E kill -TERM 3032526 00:03:11.349 15:39:56 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:11.349 15:39:56 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:03:11.609 15:39:56 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:11.609 15:39:56 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:11.609 15:39:56 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:11.609 15:39:56 -- 
common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:11.609 15:39:56 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:11.609 15:39:56 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:11.609 15:39:56 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:11.609 15:39:56 -- scripts/common.sh@336 -- # IFS=.-: 00:03:11.609 15:39:56 -- scripts/common.sh@336 -- # read -ra ver1 00:03:11.609 15:39:56 -- scripts/common.sh@337 -- # IFS=.-: 00:03:11.609 15:39:56 -- scripts/common.sh@337 -- # read -ra ver2 00:03:11.609 15:39:56 -- scripts/common.sh@338 -- # local 'op=<' 00:03:11.609 15:39:56 -- scripts/common.sh@340 -- # ver1_l=2 00:03:11.609 15:39:56 -- scripts/common.sh@341 -- # ver2_l=1 00:03:11.609 15:39:56 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:11.609 15:39:56 -- scripts/common.sh@344 -- # case "$op" in 00:03:11.609 15:39:56 -- scripts/common.sh@345 -- # : 1 00:03:11.609 15:39:56 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:11.609 15:39:56 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:11.609 15:39:56 -- scripts/common.sh@365 -- # decimal 1 00:03:11.609 15:39:56 -- scripts/common.sh@353 -- # local d=1 00:03:11.609 15:39:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:11.609 15:39:57 -- scripts/common.sh@355 -- # echo 1 00:03:11.609 15:39:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:11.609 15:39:57 -- scripts/common.sh@366 -- # decimal 2 00:03:11.609 15:39:57 -- scripts/common.sh@353 -- # local d=2 00:03:11.609 15:39:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:11.609 15:39:57 -- scripts/common.sh@355 -- # echo 2 00:03:11.609 15:39:57 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:11.609 15:39:57 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:11.609 15:39:57 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:11.609 15:39:57 -- scripts/common.sh@368 -- # return 0 00:03:11.609 15:39:57 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:11.609 15:39:57 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:11.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.609 --rc genhtml_branch_coverage=1 00:03:11.609 --rc genhtml_function_coverage=1 00:03:11.609 --rc genhtml_legend=1 00:03:11.609 --rc geninfo_all_blocks=1 00:03:11.609 --rc geninfo_unexecuted_blocks=1 00:03:11.609 00:03:11.609 ' 00:03:11.609 15:39:57 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:11.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.609 --rc genhtml_branch_coverage=1 00:03:11.609 --rc genhtml_function_coverage=1 00:03:11.609 --rc genhtml_legend=1 00:03:11.609 --rc geninfo_all_blocks=1 00:03:11.609 --rc geninfo_unexecuted_blocks=1 00:03:11.609 00:03:11.609 ' 00:03:11.609 15:39:57 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:11.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.609 --rc genhtml_branch_coverage=1 00:03:11.609 --rc genhtml_function_coverage=1 00:03:11.609 --rc genhtml_legend=1 00:03:11.609 --rc geninfo_all_blocks=1 00:03:11.609 --rc geninfo_unexecuted_blocks=1 00:03:11.609 00:03:11.609 ' 00:03:11.609 15:39:57 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:11.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:11.609 --rc genhtml_branch_coverage=1 00:03:11.609 --rc genhtml_function_coverage=1 00:03:11.609 --rc genhtml_legend=1 00:03:11.609 --rc geninfo_all_blocks=1 00:03:11.609 --rc geninfo_unexecuted_blocks=1 
00:03:11.609 00:03:11.609 ' 00:03:11.609 15:39:57 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:11.609 15:39:57 -- nvmf/common.sh@7 -- # uname -s 00:03:11.609 15:39:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:11.609 15:39:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:11.609 15:39:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:11.609 15:39:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:11.609 15:39:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:11.609 15:39:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:11.609 15:39:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:11.609 15:39:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:11.609 15:39:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:11.609 15:39:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:11.609 15:39:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:11.609 15:39:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:11.609 15:39:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:11.609 15:39:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:11.609 15:39:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:11.609 15:39:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:11.609 15:39:57 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:11.609 15:39:57 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:11.609 15:39:57 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:11.610 15:39:57 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:11.610 15:39:57 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:11.610 15:39:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.610 15:39:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.610 15:39:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.610 15:39:57 -- paths/export.sh@5 -- # export PATH 00:03:11.610 15:39:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:11.610 15:39:57 -- nvmf/common.sh@51 -- # : 0 00:03:11.610 15:39:57 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:11.610 15:39:57 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:11.610 15:39:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:11.610 15:39:57 -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:11.610 15:39:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:11.610 15:39:57 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:11.610 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:11.610 15:39:57 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:11.610 15:39:57 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:11.610 15:39:57 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:11.610 15:39:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:11.610 15:39:57 -- spdk/autotest.sh@32 -- # uname -s 00:03:11.610 15:39:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:11.610 15:39:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:11.610 15:39:57 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:11.610 15:39:57 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:11.610 15:39:57 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:11.610 15:39:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:11.610 15:39:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:11.610 15:39:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:11.610 15:39:57 -- spdk/autotest.sh@48 -- # udevadm_pid=3096474 00:03:11.610 15:39:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:11.610 15:39:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:11.610 15:39:57 -- pm/common@17 -- # local monitor 00:03:11.610 15:39:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.610 15:39:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.610 15:39:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.610 15:39:57 -- pm/common@21 -- # date +%s 00:03:11.610 15:39:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.610 15:39:57 -- pm/common@21 -- # date +%s 00:03:11.610 15:39:57 -- pm/common@25 -- # sleep 1 00:03:11.610 15:39:57 -- pm/common@21 -- # date +%s 00:03:11.610 15:39:57 -- pm/common@21 -- # date +%s 00:03:11.610 15:39:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731854397 00:03:11.610 15:39:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731854397 00:03:11.610 15:39:57 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731854397 00:03:11.610 15:39:57 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731854397 00:03:11.610 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731854397_collect-cpu-load.pm.log 00:03:11.610 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731854397_collect-vmstat.pm.log 00:03:11.610 Redirecting to 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731854397_collect-cpu-temp.pm.log 00:03:11.610 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731854397_collect-bmc-pm.bmc.pm.log 00:03:12.548 15:39:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:12.548 15:39:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:12.548 15:39:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:12.548 15:39:58 -- common/autotest_common.sh@10 -- # set +x 00:03:12.548 15:39:58 -- spdk/autotest.sh@59 -- # create_test_list 00:03:12.548 15:39:58 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:12.548 15:39:58 -- common/autotest_common.sh@10 -- # set +x 00:03:12.808 15:39:58 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:12.808 15:39:58 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:12.808 15:39:58 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:12.808 15:39:58 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:12.808 15:39:58 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:12.808 15:39:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:12.808 15:39:58 -- common/autotest_common.sh@1457 -- # uname 00:03:12.808 15:39:58 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:12.808 15:39:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:12.808 15:39:58 -- common/autotest_common.sh@1477 -- # uname 00:03:12.808 15:39:58 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:12.808 15:39:58 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:12.808 15:39:58 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:12.808 lcov: LCOV version 1.15 00:03:12.808 15:39:58 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:34.755 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:34.755 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:37.290 15:40:22 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:37.291 15:40:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:37.291 15:40:22 -- common/autotest_common.sh@10 -- # set +x 00:03:37.549 15:40:22 -- spdk/autotest.sh@78 -- # rm -f 00:03:37.549 15:40:22 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:40.879 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:40.879 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:40.879 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:40.879 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:40.879 0000:00:04.3 (8086 2021): Already using the ioatdma driver 
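The lcov invocation above (autotest.sh@72) captures an initial, all-zero baseline over the whole tree before any test has run; the nvme_stubs.gcno warning is expected for stub-only objects. After the run, the usual lcov flow is to capture a second tracefile and merge it with this baseline so that files never touched by a test still show up at 0% — roughly like the sketch below (output names cov_base.info/cov_test.info follow the trace; the merge and genhtml steps are the standard lcov idiom rather than something shown in this excerpt):

    SRC=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    OUT=$SRC/../output
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q --no-external"

    # 1. Baseline capture (-i): every instrumented file, zero execution counts
    $LCOV -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"

    # 2. ...tests run here, producing .gcda files...

    # 3. Post-run capture, then merge with the baseline
    $LCOV -c -t Autotest -d "$SRC" -o "$OUT/cov_test.info"
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    genhtml -q "$OUT/cov_total.info" -o "$OUT/coverage"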
00:03:40.879 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:40.879 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:40.879 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:40.879 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:41.157 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:41.157 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:41.157 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:41.157 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:41.157 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:41.157 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:41.157 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:41.157 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:41.157 15:40:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:41.157 15:40:26 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:41.157 15:40:26 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:41.157 15:40:26 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:41.157 15:40:26 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:41.157 15:40:26 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:41.157 15:40:26 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:41.157 15:40:26 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.157 15:40:26 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:41.157 15:40:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:41.157 15:40:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.157 15:40:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:41.157 15:40:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:41.157 15:40:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:41.157 15:40:26 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:41.157 No valid GPT data, bailing 00:03:41.420 15:40:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:41.420 15:40:26 -- scripts/common.sh@394 -- # pt= 00:03:41.420 15:40:26 -- scripts/common.sh@395 -- # return 1 00:03:41.420 15:40:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:41.420 1+0 records in 00:03:41.420 1+0 records out 00:03:41.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475996 s, 220 MB/s 00:03:41.420 15:40:26 -- spdk/autotest.sh@105 -- # sync 00:03:41.420 15:40:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:41.420 15:40:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:41.420 15:40:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:49.539 15:40:34 -- spdk/autotest.sh@111 -- # uname -s 00:03:49.539 15:40:34 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:49.539 15:40:34 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:49.539 15:40:34 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:52.075 Hugepages 00:03:52.075 node hugesize free / total 00:03:52.075 node0 1048576kB 0 / 0 00:03:52.075 node0 2048kB 0 / 0 00:03:52.075 node1 1048576kB 0 / 0 00:03:52.075 node1 2048kB 0 / 0 00:03:52.075 00:03:52.075 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:52.075 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 
00:03:52.075 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:52.075 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:52.075 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:52.075 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:52.075 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:52.075 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:52.075 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:52.075 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:52.075 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:52.075 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:52.075 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:52.075 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:52.075 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:52.075 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:52.075 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:52.075 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:52.075 15:40:37 -- spdk/autotest.sh@117 -- # uname -s 00:03:52.334 15:40:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:52.334 15:40:37 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:52.334 15:40:37 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:55.623 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:55.623 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:57.527 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:57.527 15:40:42 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:58.464 15:40:43 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:58.464 15:40:43 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:58.464 15:40:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:58.464 15:40:43 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:58.464 15:40:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:58.464 15:40:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:58.465 15:40:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:58.465 15:40:43 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:58.465 15:40:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:58.724 15:40:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:58.724 15:40:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:03:58.724 15:40:44 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.015 Waiting for block devices as requested 00:04:02.015 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:02.015 0000:00:04.6 (8086 2021): vfio-pci -> 
ioatdma 00:04:02.015 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:02.015 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:02.274 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:02.274 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:02.274 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:02.533 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:02.533 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:02.533 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:02.792 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:02.792 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:02.792 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:03.052 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:03.052 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:03.052 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:03.311 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:03.311 15:40:48 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:03.311 15:40:48 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:03.311 15:40:48 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:03.311 15:40:48 -- common/autotest_common.sh@1487 -- # grep 0000:d8:00.0/nvme/nvme 00:04:03.311 15:40:48 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:03.311 15:40:48 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:03.311 15:40:48 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:03.311 15:40:48 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:03.311 15:40:48 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:03.311 15:40:48 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:03.311 15:40:48 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:03.311 15:40:48 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:03.311 15:40:48 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:03.311 15:40:48 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:03.311 15:40:48 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:03.311 15:40:48 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:03.571 15:40:48 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:03.571 15:40:48 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:03.571 15:40:48 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:03.571 15:40:48 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:03.571 15:40:48 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:03.571 15:40:48 -- common/autotest_common.sh@1543 -- # continue 00:04:03.571 15:40:48 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:03.571 15:40:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:03.571 15:40:48 -- common/autotest_common.sh@10 -- # set +x 00:04:03.571 15:40:48 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:03.571 15:40:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:03.571 15:40:48 -- common/autotest_common.sh@10 -- # set +x 00:04:03.571 15:40:48 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:06.860 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:00:04.4 
(8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:06.860 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:09.397 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:09.397 15:40:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:09.397 15:40:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:09.397 15:40:54 -- common/autotest_common.sh@10 -- # set +x 00:04:09.397 15:40:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:09.397 15:40:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:09.397 15:40:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:09.397 15:40:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:09.397 15:40:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:09.398 15:40:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:09.398 15:40:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:09.398 15:40:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:09.398 15:40:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:09.398 15:40:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:09.398 15:40:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:09.398 15:40:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:09.398 15:40:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:09.398 15:40:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:09.398 15:40:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:04:09.398 15:40:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:09.398 15:40:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:09.398 15:40:54 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:09.398 15:40:54 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:09.398 15:40:54 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:09.398 15:40:54 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:09.398 15:40:54 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:d8:00.0 00:04:09.398 15:40:54 -- common/autotest_common.sh@1579 -- # [[ -z 0000:d8:00.0 ]] 00:04:09.398 15:40:54 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3112432 00:04:09.398 15:40:54 -- common/autotest_common.sh@1585 -- # waitforlisten 3112432 00:04:09.398 15:40:54 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.398 15:40:54 -- common/autotest_common.sh@835 -- # '[' -z 3112432 ']' 00:04:09.398 15:40:54 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.398 15:40:54 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:09.398 15:40:54 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.398 15:40:54 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:09.398 15:40:54 -- common/autotest_common.sh@10 -- # set +x 00:04:09.398 [2024-11-17 15:40:54.710482] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:09.398 [2024-11-17 15:40:54.710582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112432 ] 00:04:09.398 [2024-11-17 15:40:54.841757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.398 [2024-11-17 15:40:54.939415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.334 15:40:55 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:10.334 15:40:55 -- common/autotest_common.sh@868 -- # return 0 00:04:10.334 15:40:55 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:10.334 15:40:55 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:10.334 15:40:55 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:13.623 nvme0n1 00:04:13.624 15:40:58 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:13.624 [2024-11-17 15:40:58.914711] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:13.624 request: 00:04:13.624 { 00:04:13.624 "nvme_ctrlr_name": "nvme0", 00:04:13.624 "password": "test", 00:04:13.624 "method": "bdev_nvme_opal_revert", 00:04:13.624 "req_id": 1 00:04:13.624 } 00:04:13.624 Got JSON-RPC error response 00:04:13.624 response: 00:04:13.624 { 00:04:13.624 "code": -32602, 00:04:13.624 "message": "Invalid parameters" 00:04:13.624 } 00:04:13.624 15:40:58 -- common/autotest_common.sh@1591 -- # true 00:04:13.624 15:40:58 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:13.624 15:40:58 -- common/autotest_common.sh@1595 -- # killprocess 3112432 00:04:13.624 15:40:58 -- common/autotest_common.sh@954 -- # '[' -z 3112432 ']' 00:04:13.624 15:40:58 -- common/autotest_common.sh@958 -- # kill -0 3112432 00:04:13.624 15:40:58 -- common/autotest_common.sh@959 -- # uname 00:04:13.624 15:40:58 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.624 15:40:58 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3112432 00:04:13.624 15:40:58 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.624 15:40:58 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.624 15:40:58 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3112432' 00:04:13.624 killing process with pid 3112432 00:04:13.624 15:40:58 -- common/autotest_common.sh@973 -- # kill 3112432 00:04:13.624 15:40:58 -- common/autotest_common.sh@978 -- # wait 3112432 00:04:17.815 15:41:03 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:17.815 15:41:03 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:17.815 15:41:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:17.815 15:41:03 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:17.816 15:41:03 -- spdk/autotest.sh@149 -- # 
timing_enter lib 00:04:17.816 15:41:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.816 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:04:17.816 15:41:03 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:17.816 15:41:03 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:17.816 15:41:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.816 15:41:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.816 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:04:17.816 ************************************ 00:04:17.816 START TEST env 00:04:17.816 ************************************ 00:04:17.816 15:41:03 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:04:18.075 * Looking for test storage... 00:04:18.075 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:04:18.075 15:41:03 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:18.075 15:41:03 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:18.075 15:41:03 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:18.075 15:41:03 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:18.075 15:41:03 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.075 15:41:03 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.075 15:41:03 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.075 15:41:03 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.075 15:41:03 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.075 15:41:03 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.075 15:41:03 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.075 15:41:03 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.075 15:41:03 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.075 15:41:03 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.075 15:41:03 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.075 15:41:03 env -- scripts/common.sh@344 -- # case "$op" in 00:04:18.075 15:41:03 env -- scripts/common.sh@345 -- # : 1 00:04:18.075 15:41:03 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.075 15:41:03 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:18.075 15:41:03 env -- scripts/common.sh@365 -- # decimal 1 00:04:18.075 15:41:03 env -- scripts/common.sh@353 -- # local d=1 00:04:18.075 15:41:03 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.075 15:41:03 env -- scripts/common.sh@355 -- # echo 1 00:04:18.075 15:41:03 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.075 15:41:03 env -- scripts/common.sh@366 -- # decimal 2 00:04:18.075 15:41:03 env -- scripts/common.sh@353 -- # local d=2 00:04:18.075 15:41:03 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.075 15:41:03 env -- scripts/common.sh@355 -- # echo 2 00:04:18.075 15:41:03 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.075 15:41:03 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.075 15:41:03 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.075 15:41:03 env -- scripts/common.sh@368 -- # return 0 00:04:18.075 15:41:03 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.075 15:41:03 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:18.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.075 --rc genhtml_branch_coverage=1 00:04:18.075 --rc genhtml_function_coverage=1 00:04:18.075 --rc genhtml_legend=1 00:04:18.075 --rc geninfo_all_blocks=1 00:04:18.075 --rc geninfo_unexecuted_blocks=1 00:04:18.075 00:04:18.075 ' 00:04:18.075 15:41:03 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:18.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.075 --rc genhtml_branch_coverage=1 00:04:18.075 --rc genhtml_function_coverage=1 00:04:18.075 --rc genhtml_legend=1 00:04:18.075 --rc geninfo_all_blocks=1 00:04:18.075 --rc geninfo_unexecuted_blocks=1 00:04:18.075 00:04:18.075 ' 00:04:18.075 15:41:03 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:18.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.075 --rc genhtml_branch_coverage=1 00:04:18.075 --rc genhtml_function_coverage=1 00:04:18.075 --rc genhtml_legend=1 00:04:18.075 --rc geninfo_all_blocks=1 00:04:18.075 --rc geninfo_unexecuted_blocks=1 00:04:18.075 00:04:18.075 ' 00:04:18.075 15:41:03 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:18.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.075 --rc genhtml_branch_coverage=1 00:04:18.075 --rc genhtml_function_coverage=1 00:04:18.075 --rc genhtml_legend=1 00:04:18.075 --rc geninfo_all_blocks=1 00:04:18.075 --rc geninfo_unexecuted_blocks=1 00:04:18.075 00:04:18.075 ' 00:04:18.075 15:41:03 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:18.075 15:41:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.075 15:41:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.075 15:41:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.075 ************************************ 00:04:18.075 START TEST env_memory 00:04:18.075 ************************************ 00:04:18.075 15:41:03 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:04:18.075 00:04:18.075 00:04:18.075 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.075 http://cunit.sourceforge.net/ 00:04:18.075 00:04:18.075 00:04:18.075 Suite: memory 00:04:18.335 Test: alloc and free memory map ...[2024-11-17 15:41:03.627885] 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:18.335 passed 00:04:18.335 Test: mem map translation ...[2024-11-17 15:41:03.663830] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:18.335 [2024-11-17 15:41:03.663856] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:18.335 [2024-11-17 15:41:03.663910] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:18.335 [2024-11-17 15:41:03.663928] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:18.335 passed 00:04:18.335 Test: mem map registration ...[2024-11-17 15:41:03.718069] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:18.335 [2024-11-17 15:41:03.718096] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:18.335 passed 00:04:18.335 Test: mem map adjacent registrations ...passed 00:04:18.335 00:04:18.335 Run Summary: Type Total Ran Passed Failed Inactive 00:04:18.335 suites 1 1 n/a 0 0 00:04:18.335 tests 4 4 4 0 0 00:04:18.335 asserts 152 152 152 0 n/a 00:04:18.335 00:04:18.335 Elapsed time = 0.197 seconds 00:04:18.335 00:04:18.335 real 0m0.237s 00:04:18.335 user 0m0.213s 00:04:18.335 sys 0m0.023s 00:04:18.335 15:41:03 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.335 15:41:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:18.335 ************************************ 00:04:18.335 END TEST env_memory 00:04:18.335 ************************************ 00:04:18.335 15:41:03 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:18.335 15:41:03 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.335 15:41:03 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.335 15:41:03 env -- common/autotest_common.sh@10 -- # set +x 00:04:18.594 ************************************ 00:04:18.594 START TEST env_vtophys 00:04:18.594 ************************************ 00:04:18.594 15:41:03 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:18.594 EAL: lib.eal log level changed from notice to debug 00:04:18.594 EAL: Detected lcore 0 as core 0 on socket 0 00:04:18.594 EAL: Detected lcore 1 as core 1 on socket 0 00:04:18.594 EAL: Detected lcore 2 as core 2 on socket 0 00:04:18.594 EAL: Detected lcore 3 as core 3 on socket 0 00:04:18.594 EAL: Detected lcore 4 as core 4 on socket 0 00:04:18.594 EAL: Detected lcore 5 as core 5 on socket 0 00:04:18.594 EAL: Detected lcore 6 as core 6 on socket 0 00:04:18.594 EAL: Detected lcore 7 as core 8 on socket 0 00:04:18.594 EAL: Detected lcore 8 as core 9 on socket 0 00:04:18.594 EAL: Detected lcore 9 as core 10 on socket 0 00:04:18.594 EAL: Detected lcore 10 as core 11 on socket 0 00:04:18.594 
EAL: Detected lcore 11 as core 12 on socket 0 00:04:18.594 EAL: Detected lcore 12 as core 13 on socket 0 00:04:18.594 EAL: Detected lcore 13 as core 14 on socket 0 00:04:18.594 EAL: Detected lcore 14 as core 16 on socket 0 00:04:18.594 EAL: Detected lcore 15 as core 17 on socket 0 00:04:18.594 EAL: Detected lcore 16 as core 18 on socket 0 00:04:18.594 EAL: Detected lcore 17 as core 19 on socket 0 00:04:18.594 EAL: Detected lcore 18 as core 20 on socket 0 00:04:18.594 EAL: Detected lcore 19 as core 21 on socket 0 00:04:18.594 EAL: Detected lcore 20 as core 22 on socket 0 00:04:18.594 EAL: Detected lcore 21 as core 24 on socket 0 00:04:18.594 EAL: Detected lcore 22 as core 25 on socket 0 00:04:18.594 EAL: Detected lcore 23 as core 26 on socket 0 00:04:18.594 EAL: Detected lcore 24 as core 27 on socket 0 00:04:18.594 EAL: Detected lcore 25 as core 28 on socket 0 00:04:18.594 EAL: Detected lcore 26 as core 29 on socket 0 00:04:18.594 EAL: Detected lcore 27 as core 30 on socket 0 00:04:18.594 EAL: Detected lcore 28 as core 0 on socket 1 00:04:18.594 EAL: Detected lcore 29 as core 1 on socket 1 00:04:18.594 EAL: Detected lcore 30 as core 2 on socket 1 00:04:18.594 EAL: Detected lcore 31 as core 3 on socket 1 00:04:18.594 EAL: Detected lcore 32 as core 4 on socket 1 00:04:18.594 EAL: Detected lcore 33 as core 5 on socket 1 00:04:18.594 EAL: Detected lcore 34 as core 6 on socket 1 00:04:18.594 EAL: Detected lcore 35 as core 8 on socket 1 00:04:18.594 EAL: Detected lcore 36 as core 9 on socket 1 00:04:18.594 EAL: Detected lcore 37 as core 10 on socket 1 00:04:18.594 EAL: Detected lcore 38 as core 11 on socket 1 00:04:18.594 EAL: Detected lcore 39 as core 12 on socket 1 00:04:18.594 EAL: Detected lcore 40 as core 13 on socket 1 00:04:18.594 EAL: Detected lcore 41 as core 14 on socket 1 00:04:18.594 EAL: Detected lcore 42 as core 16 on socket 1 00:04:18.594 EAL: Detected lcore 43 as core 17 on socket 1 00:04:18.594 EAL: Detected lcore 44 as core 18 on socket 1 00:04:18.594 EAL: Detected lcore 45 as core 19 on socket 1 00:04:18.594 EAL: Detected lcore 46 as core 20 on socket 1 00:04:18.594 EAL: Detected lcore 47 as core 21 on socket 1 00:04:18.594 EAL: Detected lcore 48 as core 22 on socket 1 00:04:18.594 EAL: Detected lcore 49 as core 24 on socket 1 00:04:18.594 EAL: Detected lcore 50 as core 25 on socket 1 00:04:18.594 EAL: Detected lcore 51 as core 26 on socket 1 00:04:18.594 EAL: Detected lcore 52 as core 27 on socket 1 00:04:18.594 EAL: Detected lcore 53 as core 28 on socket 1 00:04:18.594 EAL: Detected lcore 54 as core 29 on socket 1 00:04:18.594 EAL: Detected lcore 55 as core 30 on socket 1 00:04:18.594 EAL: Detected lcore 56 as core 0 on socket 0 00:04:18.594 EAL: Detected lcore 57 as core 1 on socket 0 00:04:18.594 EAL: Detected lcore 58 as core 2 on socket 0 00:04:18.594 EAL: Detected lcore 59 as core 3 on socket 0 00:04:18.594 EAL: Detected lcore 60 as core 4 on socket 0 00:04:18.594 EAL: Detected lcore 61 as core 5 on socket 0 00:04:18.594 EAL: Detected lcore 62 as core 6 on socket 0 00:04:18.594 EAL: Detected lcore 63 as core 8 on socket 0 00:04:18.594 EAL: Detected lcore 64 as core 9 on socket 0 00:04:18.594 EAL: Detected lcore 65 as core 10 on socket 0 00:04:18.594 EAL: Detected lcore 66 as core 11 on socket 0 00:04:18.594 EAL: Detected lcore 67 as core 12 on socket 0 00:04:18.594 EAL: Detected lcore 68 as core 13 on socket 0 00:04:18.594 EAL: Detected lcore 69 as core 14 on socket 0 00:04:18.594 EAL: Detected lcore 70 as core 16 on socket 0 00:04:18.594 EAL: Detected lcore 71 as core 
17 on socket 0 00:04:18.594 EAL: Detected lcore 72 as core 18 on socket 0 00:04:18.594 EAL: Detected lcore 73 as core 19 on socket 0 00:04:18.594 EAL: Detected lcore 74 as core 20 on socket 0 00:04:18.594 EAL: Detected lcore 75 as core 21 on socket 0 00:04:18.594 EAL: Detected lcore 76 as core 22 on socket 0 00:04:18.594 EAL: Detected lcore 77 as core 24 on socket 0 00:04:18.594 EAL: Detected lcore 78 as core 25 on socket 0 00:04:18.594 EAL: Detected lcore 79 as core 26 on socket 0 00:04:18.594 EAL: Detected lcore 80 as core 27 on socket 0 00:04:18.594 EAL: Detected lcore 81 as core 28 on socket 0 00:04:18.594 EAL: Detected lcore 82 as core 29 on socket 0 00:04:18.594 EAL: Detected lcore 83 as core 30 on socket 0 00:04:18.594 EAL: Detected lcore 84 as core 0 on socket 1 00:04:18.594 EAL: Detected lcore 85 as core 1 on socket 1 00:04:18.594 EAL: Detected lcore 86 as core 2 on socket 1 00:04:18.594 EAL: Detected lcore 87 as core 3 on socket 1 00:04:18.594 EAL: Detected lcore 88 as core 4 on socket 1 00:04:18.594 EAL: Detected lcore 89 as core 5 on socket 1 00:04:18.594 EAL: Detected lcore 90 as core 6 on socket 1 00:04:18.594 EAL: Detected lcore 91 as core 8 on socket 1 00:04:18.594 EAL: Detected lcore 92 as core 9 on socket 1 00:04:18.594 EAL: Detected lcore 93 as core 10 on socket 1 00:04:18.594 EAL: Detected lcore 94 as core 11 on socket 1 00:04:18.594 EAL: Detected lcore 95 as core 12 on socket 1 00:04:18.594 EAL: Detected lcore 96 as core 13 on socket 1 00:04:18.594 EAL: Detected lcore 97 as core 14 on socket 1 00:04:18.594 EAL: Detected lcore 98 as core 16 on socket 1 00:04:18.594 EAL: Detected lcore 99 as core 17 on socket 1 00:04:18.595 EAL: Detected lcore 100 as core 18 on socket 1 00:04:18.595 EAL: Detected lcore 101 as core 19 on socket 1 00:04:18.595 EAL: Detected lcore 102 as core 20 on socket 1 00:04:18.595 EAL: Detected lcore 103 as core 21 on socket 1 00:04:18.595 EAL: Detected lcore 104 as core 22 on socket 1 00:04:18.595 EAL: Detected lcore 105 as core 24 on socket 1 00:04:18.595 EAL: Detected lcore 106 as core 25 on socket 1 00:04:18.595 EAL: Detected lcore 107 as core 26 on socket 1 00:04:18.595 EAL: Detected lcore 108 as core 27 on socket 1 00:04:18.595 EAL: Detected lcore 109 as core 28 on socket 1 00:04:18.595 EAL: Detected lcore 110 as core 29 on socket 1 00:04:18.595 EAL: Detected lcore 111 as core 30 on socket 1 00:04:18.595 EAL: Maximum logical cores by configuration: 128 00:04:18.595 EAL: Detected CPU lcores: 112 00:04:18.595 EAL: Detected NUMA nodes: 2 00:04:18.595 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:18.595 EAL: Detected shared linkage of DPDK 00:04:18.595 EAL: No shared files mode enabled, IPC will be disabled 00:04:18.595 EAL: Bus pci wants IOVA as 'DC' 00:04:18.595 EAL: Buses did not request a specific IOVA mode. 00:04:18.595 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:18.595 EAL: Selected IOVA mode 'VA' 00:04:18.595 EAL: Probing VFIO support... 00:04:18.595 EAL: IOMMU type 1 (Type 1) is supported 00:04:18.595 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:18.595 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:18.595 EAL: VFIO support initialized 00:04:18.595 EAL: Ask a virtual area of 0x2e000 bytes 00:04:18.595 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:18.595 EAL: Setting up physically contiguous memory... 
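Everything EAL does from here on (the memseg lists and the vtophys malloc test below) is carved out of the 2048 kB hugepages that setup.sh reserved earlier on both NUMA nodes. A quick way to eyeball that pool per node outside of setup.sh status — the sysfs paths are standard, the counts will differ per machine:

    # Per-NUMA-node 2 MiB hugepage totals and free counts (same data as the
    # "node0/node1 2048kB" lines printed by setup.sh status earlier in the log)
    for node in /sys/devices/system/node/node[0-9]*; do
        hp=$node/hugepages/hugepages-2048kB
        printf '%s: %s free / %s total\n' \
            "$(basename "$node")" "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done

    # Reserving more on a given node (example: 1024 pages on node 0):
    echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages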
00:04:18.595 EAL: Setting maximum number of open files to 524288 00:04:18.595 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:18.595 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:18.595 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:18.595 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.595 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:18.595 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.595 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.595 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:18.595 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:18.595 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.595 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:18.595 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.595 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.595 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:18.595 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:18.595 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.595 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:18.595 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.595 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.595 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:18.595 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:18.595 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.595 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:18.595 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:18.595 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.595 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:18.595 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:18.595 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:18.595 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.595 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:18.595 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.595 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.595 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:18.595 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:18.595 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.595 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:18.595 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.595 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.595 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:18.595 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:18.595 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.595 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:18.595 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.595 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.595 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:18.595 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:18.595 EAL: Ask a virtual area of 0x61000 bytes 00:04:18.595 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:18.595 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:18.595 EAL: Ask a virtual area of 0x400000000 bytes 00:04:18.595 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:18.595 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:18.595 EAL: Hugepages will be freed exactly as allocated. 00:04:18.595 EAL: No shared files mode enabled, IPC is disabled 00:04:18.595 EAL: No shared files mode enabled, IPC is disabled 00:04:18.595 EAL: TSC frequency is ~2500000 KHz 00:04:18.595 EAL: Main lcore 0 is ready (tid=7f707915fa40;cpuset=[0]) 00:04:18.595 EAL: Trying to obtain current memory policy. 00:04:18.595 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.595 EAL: Restoring previous memory policy: 0 00:04:18.595 EAL: request: mp_malloc_sync 00:04:18.595 EAL: No shared files mode enabled, IPC is disabled 00:04:18.595 EAL: Heap on socket 0 was expanded by 2MB 00:04:18.595 EAL: No shared files mode enabled, IPC is disabled 00:04:18.595 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:18.595 EAL: Mem event callback 'spdk:(nil)' registered 00:04:18.595 00:04:18.595 00:04:18.595 CUnit - A unit testing framework for C - Version 2.1-3 00:04:18.595 http://cunit.sourceforge.net/ 00:04:18.595 00:04:18.595 00:04:18.595 Suite: components_suite 00:04:19.163 Test: vtophys_malloc_test ...passed 00:04:19.163 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:19.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.163 EAL: Restoring previous memory policy: 4 00:04:19.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.163 EAL: request: mp_malloc_sync 00:04:19.163 EAL: No shared files mode enabled, IPC is disabled 00:04:19.163 EAL: Heap on socket 0 was expanded by 4MB 00:04:19.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.163 EAL: request: mp_malloc_sync 00:04:19.163 EAL: No shared files mode enabled, IPC is disabled 00:04:19.163 EAL: Heap on socket 0 was shrunk by 4MB 00:04:19.163 EAL: Trying to obtain current memory policy. 00:04:19.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.163 EAL: Restoring previous memory policy: 4 00:04:19.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.163 EAL: request: mp_malloc_sync 00:04:19.163 EAL: No shared files mode enabled, IPC is disabled 00:04:19.163 EAL: Heap on socket 0 was expanded by 6MB 00:04:19.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.163 EAL: request: mp_malloc_sync 00:04:19.163 EAL: No shared files mode enabled, IPC is disabled 00:04:19.163 EAL: Heap on socket 0 was shrunk by 6MB 00:04:19.163 EAL: Trying to obtain current memory policy. 00:04:19.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.163 EAL: Restoring previous memory policy: 4 00:04:19.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.163 EAL: request: mp_malloc_sync 00:04:19.163 EAL: No shared files mode enabled, IPC is disabled 00:04:19.163 EAL: Heap on socket 0 was expanded by 10MB 00:04:19.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.163 EAL: request: mp_malloc_sync 00:04:19.163 EAL: No shared files mode enabled, IPC is disabled 00:04:19.163 EAL: Heap on socket 0 was shrunk by 10MB 00:04:19.163 EAL: Trying to obtain current memory policy. 
00:04:19.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.163 EAL: Restoring previous memory policy: 4 00:04:19.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.163 EAL: request: mp_malloc_sync 00:04:19.163 EAL: No shared files mode enabled, IPC is disabled 00:04:19.163 EAL: Heap on socket 0 was expanded by 18MB 00:04:19.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.163 EAL: request: mp_malloc_sync 00:04:19.163 EAL: No shared files mode enabled, IPC is disabled 00:04:19.163 EAL: Heap on socket 0 was shrunk by 18MB 00:04:19.163 EAL: Trying to obtain current memory policy. 00:04:19.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.163 EAL: Restoring previous memory policy: 4 00:04:19.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.163 EAL: request: mp_malloc_sync 00:04:19.163 EAL: No shared files mode enabled, IPC is disabled 00:04:19.163 EAL: Heap on socket 0 was expanded by 34MB 00:04:19.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.163 EAL: request: mp_malloc_sync 00:04:19.163 EAL: No shared files mode enabled, IPC is disabled 00:04:19.163 EAL: Heap on socket 0 was shrunk by 34MB 00:04:19.163 EAL: Trying to obtain current memory policy. 00:04:19.163 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.163 EAL: Restoring previous memory policy: 4 00:04:19.163 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.163 EAL: request: mp_malloc_sync 00:04:19.163 EAL: No shared files mode enabled, IPC is disabled 00:04:19.163 EAL: Heap on socket 0 was expanded by 66MB 00:04:19.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.422 EAL: request: mp_malloc_sync 00:04:19.422 EAL: No shared files mode enabled, IPC is disabled 00:04:19.422 EAL: Heap on socket 0 was shrunk by 66MB 00:04:19.422 EAL: Trying to obtain current memory policy. 00:04:19.422 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.422 EAL: Restoring previous memory policy: 4 00:04:19.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.422 EAL: request: mp_malloc_sync 00:04:19.423 EAL: No shared files mode enabled, IPC is disabled 00:04:19.423 EAL: Heap on socket 0 was expanded by 130MB 00:04:19.682 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.682 EAL: request: mp_malloc_sync 00:04:19.682 EAL: No shared files mode enabled, IPC is disabled 00:04:19.682 EAL: Heap on socket 0 was shrunk by 130MB 00:04:19.942 EAL: Trying to obtain current memory policy. 00:04:19.942 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:19.942 EAL: Restoring previous memory policy: 4 00:04:19.942 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.942 EAL: request: mp_malloc_sync 00:04:19.942 EAL: No shared files mode enabled, IPC is disabled 00:04:19.942 EAL: Heap on socket 0 was expanded by 258MB 00:04:20.209 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.210 EAL: request: mp_malloc_sync 00:04:20.210 EAL: No shared files mode enabled, IPC is disabled 00:04:20.210 EAL: Heap on socket 0 was shrunk by 258MB 00:04:20.781 EAL: Trying to obtain current memory policy. 
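Each malloc step above first pins its allocations to socket 0 with an MPOL_PREFERRED policy before the spdk:(nil) mem event callback grows or shrinks the heap. The same "prefer this node, fall back only when it is full" behaviour can be applied to a whole process from the shell, which can be handy when reproducing these expansions by hand (plain numactl usage, not part of the test itself):

    # Prefer NUMA node 0 for all allocations of the vtophys binary; fall back
    # to other nodes only when node 0 runs out of memory.
    sudo numactl --preferred=0 \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys

    # --membind=0 would be the strict variant: fail instead of falling back.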
00:04:20.781 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.781 EAL: Restoring previous memory policy: 4 00:04:20.781 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.781 EAL: request: mp_malloc_sync 00:04:20.781 EAL: No shared files mode enabled, IPC is disabled 00:04:20.781 EAL: Heap on socket 0 was expanded by 514MB 00:04:21.801 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.801 EAL: request: mp_malloc_sync 00:04:21.801 EAL: No shared files mode enabled, IPC is disabled 00:04:21.801 EAL: Heap on socket 0 was shrunk by 514MB 00:04:22.369 EAL: Trying to obtain current memory policy. 00:04:22.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.628 EAL: Restoring previous memory policy: 4 00:04:22.628 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.628 EAL: request: mp_malloc_sync 00:04:22.628 EAL: No shared files mode enabled, IPC is disabled 00:04:22.628 EAL: Heap on socket 0 was expanded by 1026MB 00:04:24.007 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.265 EAL: request: mp_malloc_sync 00:04:24.265 EAL: No shared files mode enabled, IPC is disabled 00:04:24.265 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:26.168 passed 00:04:26.168 00:04:26.168 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.168 suites 1 1 n/a 0 0 00:04:26.168 tests 2 2 2 0 0 00:04:26.168 asserts 497 497 497 0 n/a 00:04:26.168 00:04:26.168 Elapsed time = 7.251 seconds 00:04:26.168 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.168 EAL: request: mp_malloc_sync 00:04:26.168 EAL: No shared files mode enabled, IPC is disabled 00:04:26.168 EAL: Heap on socket 0 was shrunk by 2MB 00:04:26.168 EAL: No shared files mode enabled, IPC is disabled 00:04:26.168 EAL: No shared files mode enabled, IPC is disabled 00:04:26.168 EAL: No shared files mode enabled, IPC is disabled 00:04:26.168 00:04:26.168 real 0m7.517s 00:04:26.168 user 0m6.650s 00:04:26.168 sys 0m0.817s 00:04:26.168 15:41:11 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.168 15:41:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:26.168 ************************************ 00:04:26.168 END TEST env_vtophys 00:04:26.168 ************************************ 00:04:26.168 15:41:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:26.168 15:41:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:26.168 15:41:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.168 15:41:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.168 ************************************ 00:04:26.168 START TEST env_pci 00:04:26.168 ************************************ 00:04:26.168 15:41:11 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:04:26.168 00:04:26.168 00:04:26.168 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.168 http://cunit.sourceforge.net/ 00:04:26.168 00:04:26.168 00:04:26.168 Suite: pci 00:04:26.168 Test: pci_hook ...[2024-11-17 15:41:11.524082] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3115485 has claimed it 00:04:26.168 EAL: Cannot find device (10000:00:01.0) 00:04:26.168 EAL: Failed to attach device on primary process 00:04:26.168 passed 00:04:26.168 00:04:26.168 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.168 suites 1 
1 n/a 0 0 00:04:26.168 tests 1 1 1 0 0 00:04:26.168 asserts 25 25 25 0 n/a 00:04:26.168 00:04:26.168 Elapsed time = 0.047 seconds 00:04:26.168 00:04:26.168 real 0m0.124s 00:04:26.168 user 0m0.037s 00:04:26.168 sys 0m0.087s 00:04:26.168 15:41:11 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:26.168 15:41:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:26.168 ************************************ 00:04:26.168 END TEST env_pci 00:04:26.168 ************************************ 00:04:26.168 15:41:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:26.168 15:41:11 env -- env/env.sh@15 -- # uname 00:04:26.168 15:41:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:26.168 15:41:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:26.168 15:41:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.168 15:41:11 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:26.168 15:41:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.168 15:41:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.168 ************************************ 00:04:26.168 START TEST env_dpdk_post_init 00:04:26.168 ************************************ 00:04:26.168 15:41:11 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.428 EAL: Detected CPU lcores: 112 00:04:26.428 EAL: Detected NUMA nodes: 2 00:04:26.428 EAL: Detected shared linkage of DPDK 00:04:26.428 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.428 EAL: Selected IOVA mode 'VA' 00:04:26.428 EAL: VFIO support initialized 00:04:26.428 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.428 EAL: Using IOMMU type 1 (Type 1) 00:04:26.428 EAL: Ignore mapping IO port bar(1) 00:04:26.428 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:26.428 EAL: Ignore mapping IO port bar(1) 00:04:26.428 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:26.688 EAL: Ignore mapping IO port 
bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:26.688 EAL: Ignore mapping IO port bar(1) 00:04:26.688 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:27.626 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:31.812 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:31.812 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:31.812 Starting DPDK initialization... 00:04:31.812 Starting SPDK post initialization... 00:04:31.812 SPDK NVMe probe 00:04:31.812 Attaching to 0000:d8:00.0 00:04:31.812 Attached to 0000:d8:00.0 00:04:31.812 Cleaning up... 00:04:31.812 00:04:31.812 real 0m5.490s 00:04:31.812 user 0m3.888s 00:04:31.812 sys 0m0.654s 00:04:31.812 15:41:17 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.812 15:41:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.812 ************************************ 00:04:31.812 END TEST env_dpdk_post_init 00:04:31.812 ************************************ 00:04:31.812 15:41:17 env -- env/env.sh@26 -- # uname 00:04:31.812 15:41:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:31.812 15:41:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.812 15:41:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.812 15:41:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.812 15:41:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.812 ************************************ 00:04:31.812 START TEST env_mem_callbacks 00:04:31.812 ************************************ 00:04:31.812 15:41:17 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:31.812 EAL: Detected CPU lcores: 112 00:04:31.812 EAL: Detected NUMA nodes: 2 00:04:31.812 EAL: Detected shared linkage of DPDK 00:04:31.812 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:32.071 EAL: Selected IOVA mode 'VA' 00:04:32.071 EAL: VFIO support initialized 00:04:32.071 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:32.071 00:04:32.071 00:04:32.071 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.071 http://cunit.sourceforge.net/ 00:04:32.071 00:04:32.071 00:04:32.071 Suite: memory 00:04:32.071 Test: test ... 
00:04:32.071 register 0x200000200000 2097152 00:04:32.071 malloc 3145728 00:04:32.071 register 0x200000400000 4194304 00:04:32.071 buf 0x2000004fffc0 len 3145728 PASSED 00:04:32.071 malloc 64 00:04:32.071 buf 0x2000004ffec0 len 64 PASSED 00:04:32.071 malloc 4194304 00:04:32.071 register 0x200000800000 6291456 00:04:32.071 buf 0x2000009fffc0 len 4194304 PASSED 00:04:32.071 free 0x2000004fffc0 3145728 00:04:32.071 free 0x2000004ffec0 64 00:04:32.071 unregister 0x200000400000 4194304 PASSED 00:04:32.071 free 0x2000009fffc0 4194304 00:04:32.071 unregister 0x200000800000 6291456 PASSED 00:04:32.071 malloc 8388608 00:04:32.071 register 0x200000400000 10485760 00:04:32.071 buf 0x2000005fffc0 len 8388608 PASSED 00:04:32.071 free 0x2000005fffc0 8388608 00:04:32.071 unregister 0x200000400000 10485760 PASSED 00:04:32.071 passed 00:04:32.071 00:04:32.071 Run Summary: Type Total Ran Passed Failed Inactive 00:04:32.071 suites 1 1 n/a 0 0 00:04:32.071 tests 1 1 1 0 0 00:04:32.071 asserts 15 15 15 0 n/a 00:04:32.071 00:04:32.071 Elapsed time = 0.060 seconds 00:04:32.071 00:04:32.071 real 0m0.172s 00:04:32.071 user 0m0.092s 00:04:32.071 sys 0m0.079s 00:04:32.071 15:41:17 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.071 15:41:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:32.071 ************************************ 00:04:32.071 END TEST env_mem_callbacks 00:04:32.071 ************************************ 00:04:32.071 00:04:32.071 real 0m14.150s 00:04:32.071 user 0m11.133s 00:04:32.071 sys 0m2.065s 00:04:32.071 15:41:17 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.071 15:41:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.071 ************************************ 00:04:32.071 END TEST env 00:04:32.071 ************************************ 00:04:32.071 15:41:17 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:32.071 15:41:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.071 15:41:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.071 15:41:17 -- common/autotest_common.sh@10 -- # set +x 00:04:32.071 ************************************ 00:04:32.071 START TEST rpc 00:04:32.071 ************************************ 00:04:32.071 15:41:17 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:04:32.330 * Looking for test storage... 
00:04:32.330 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.330 15:41:17 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.330 15:41:17 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.330 15:41:17 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.330 15:41:17 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.330 15:41:17 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.330 15:41:17 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.330 15:41:17 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.330 15:41:17 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.330 15:41:17 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.330 15:41:17 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.330 15:41:17 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.330 15:41:17 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:32.330 15:41:17 rpc -- scripts/common.sh@345 -- # : 1 00:04:32.330 15:41:17 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.330 15:41:17 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.330 15:41:17 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:32.330 15:41:17 rpc -- scripts/common.sh@353 -- # local d=1 00:04:32.330 15:41:17 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.330 15:41:17 rpc -- scripts/common.sh@355 -- # echo 1 00:04:32.330 15:41:17 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.330 15:41:17 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:32.330 15:41:17 rpc -- scripts/common.sh@353 -- # local d=2 00:04:32.330 15:41:17 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.330 15:41:17 rpc -- scripts/common.sh@355 -- # echo 2 00:04:32.330 15:41:17 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.330 15:41:17 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.330 15:41:17 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.330 15:41:17 rpc -- scripts/common.sh@368 -- # return 0 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.330 --rc genhtml_branch_coverage=1 00:04:32.330 --rc genhtml_function_coverage=1 00:04:32.330 --rc genhtml_legend=1 00:04:32.330 --rc geninfo_all_blocks=1 00:04:32.330 --rc geninfo_unexecuted_blocks=1 00:04:32.330 00:04:32.330 ' 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.330 --rc genhtml_branch_coverage=1 00:04:32.330 --rc genhtml_function_coverage=1 00:04:32.330 --rc genhtml_legend=1 00:04:32.330 --rc geninfo_all_blocks=1 00:04:32.330 --rc geninfo_unexecuted_blocks=1 00:04:32.330 00:04:32.330 ' 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.330 --rc genhtml_branch_coverage=1 00:04:32.330 --rc genhtml_function_coverage=1 00:04:32.330 
--rc genhtml_legend=1 00:04:32.330 --rc geninfo_all_blocks=1 00:04:32.330 --rc geninfo_unexecuted_blocks=1 00:04:32.330 00:04:32.330 ' 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.330 --rc genhtml_branch_coverage=1 00:04:32.330 --rc genhtml_function_coverage=1 00:04:32.330 --rc genhtml_legend=1 00:04:32.330 --rc geninfo_all_blocks=1 00:04:32.330 --rc geninfo_unexecuted_blocks=1 00:04:32.330 00:04:32.330 ' 00:04:32.330 15:41:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3116744 00:04:32.330 15:41:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:32.330 15:41:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.330 15:41:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3116744 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@835 -- # '[' -z 3116744 ']' 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.330 15:41:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.330 [2024-11-17 15:41:17.862257] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:32.330 [2024-11-17 15:41:17.862351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3116744 ] 00:04:32.589 [2024-11-17 15:41:17.994238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.589 [2024-11-17 15:41:18.087217] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:32.589 [2024-11-17 15:41:18.087264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3116744' to capture a snapshot of events at runtime. 00:04:32.589 [2024-11-17 15:41:18.087278] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:32.589 [2024-11-17 15:41:18.087289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:32.589 [2024-11-17 15:41:18.087305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3116744 for offline analysis/debug. 
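The rpc suite that starts at this point drives the freshly launched spdk_tgt entirely over its UNIX-domain RPC socket, and the rpc_cmd invocations echoed below map one-to-one onto SPDK's Python RPC client. As a rough manual equivalent of the rpc_integrity flow exercised below — a sketch reconstructed from the commands echoed in this log, assuming the default /var/tmp/spdk.sock and the scripts/rpc.py client from the same SPDK tree rather than output of a separate run:

    scripts/rpc.py bdev_malloc_create 8 512                        # create Malloc0: 8 MB backed by 512-byte blocks (16384 blocks)
    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0    # claim Malloc0 behind a passthru vbdev
    scripts/rpc.py bdev_get_bdevs | jq length                      # expect 2 bdevs while the claim is held
    scripts/rpc.py bdev_passthru_delete Passthru0                  # drop the passthru and release the claim
    scripts/rpc.py bdev_malloc_delete Malloc0                      # remove the malloc bdev; bdev_get_bdevs returns [] again
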
00:04:32.589 [2024-11-17 15:41:18.088596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.524 15:41:18 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.524 15:41:18 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:33.524 15:41:18 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:33.524 15:41:18 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:33.524 15:41:18 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:33.524 15:41:18 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:33.524 15:41:18 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.524 15:41:18 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.524 15:41:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.524 ************************************ 00:04:33.524 START TEST rpc_integrity 00:04:33.524 ************************************ 00:04:33.524 15:41:18 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:33.524 15:41:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:33.524 15:41:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.524 15:41:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.524 15:41:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.524 15:41:18 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:33.524 15:41:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:33.524 15:41:18 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:33.525 15:41:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:33.525 15:41:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.525 15:41:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.525 15:41:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.525 15:41:18 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:33.525 15:41:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.525 15:41:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.525 15:41:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.525 15:41:18 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.525 15:41:18 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.525 { 00:04:33.525 "name": "Malloc0", 00:04:33.525 "aliases": [ 00:04:33.525 "c7a23af7-f38e-481b-94bc-9c036f26ca24" 00:04:33.525 ], 00:04:33.525 "product_name": "Malloc disk", 00:04:33.525 "block_size": 512, 00:04:33.525 "num_blocks": 16384, 00:04:33.525 "uuid": "c7a23af7-f38e-481b-94bc-9c036f26ca24", 00:04:33.525 "assigned_rate_limits": { 00:04:33.525 "rw_ios_per_sec": 0, 00:04:33.525 "rw_mbytes_per_sec": 0, 00:04:33.525 "r_mbytes_per_sec": 0, 00:04:33.525 "w_mbytes_per_sec": 0 00:04:33.525 }, 00:04:33.525 "claimed": false, 
00:04:33.525 "zoned": false, 00:04:33.525 "supported_io_types": { 00:04:33.525 "read": true, 00:04:33.525 "write": true, 00:04:33.525 "unmap": true, 00:04:33.525 "flush": true, 00:04:33.525 "reset": true, 00:04:33.525 "nvme_admin": false, 00:04:33.525 "nvme_io": false, 00:04:33.525 "nvme_io_md": false, 00:04:33.525 "write_zeroes": true, 00:04:33.525 "zcopy": true, 00:04:33.525 "get_zone_info": false, 00:04:33.525 "zone_management": false, 00:04:33.525 "zone_append": false, 00:04:33.525 "compare": false, 00:04:33.525 "compare_and_write": false, 00:04:33.525 "abort": true, 00:04:33.525 "seek_hole": false, 00:04:33.525 "seek_data": false, 00:04:33.525 "copy": true, 00:04:33.525 "nvme_iov_md": false 00:04:33.525 }, 00:04:33.525 "memory_domains": [ 00:04:33.525 { 00:04:33.525 "dma_device_id": "system", 00:04:33.525 "dma_device_type": 1 00:04:33.525 }, 00:04:33.525 { 00:04:33.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.525 "dma_device_type": 2 00:04:33.525 } 00:04:33.525 ], 00:04:33.525 "driver_specific": {} 00:04:33.525 } 00:04:33.525 ]' 00:04:33.525 15:41:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.525 15:41:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.525 15:41:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:33.525 15:41:18 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.525 15:41:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.525 [2024-11-17 15:41:18.997283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:33.525 [2024-11-17 15:41:18.997327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.525 [2024-11-17 15:41:18.997351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021680 00:04:33.525 [2024-11-17 15:41:18.997366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.525 [2024-11-17 15:41:18.999510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.525 [2024-11-17 15:41:18.999538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.525 Passthru0 00:04:33.525 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.525 15:41:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.525 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.525 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.525 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.525 15:41:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.525 { 00:04:33.525 "name": "Malloc0", 00:04:33.525 "aliases": [ 00:04:33.525 "c7a23af7-f38e-481b-94bc-9c036f26ca24" 00:04:33.525 ], 00:04:33.525 "product_name": "Malloc disk", 00:04:33.525 "block_size": 512, 00:04:33.525 "num_blocks": 16384, 00:04:33.525 "uuid": "c7a23af7-f38e-481b-94bc-9c036f26ca24", 00:04:33.525 "assigned_rate_limits": { 00:04:33.525 "rw_ios_per_sec": 0, 00:04:33.525 "rw_mbytes_per_sec": 0, 00:04:33.525 "r_mbytes_per_sec": 0, 00:04:33.525 "w_mbytes_per_sec": 0 00:04:33.525 }, 00:04:33.525 "claimed": true, 00:04:33.525 "claim_type": "exclusive_write", 00:04:33.525 "zoned": false, 00:04:33.525 "supported_io_types": { 00:04:33.525 "read": true, 00:04:33.525 "write": true, 00:04:33.525 "unmap": true, 00:04:33.525 "flush": true, 00:04:33.525 "reset": 
true, 00:04:33.525 "nvme_admin": false, 00:04:33.525 "nvme_io": false, 00:04:33.525 "nvme_io_md": false, 00:04:33.525 "write_zeroes": true, 00:04:33.525 "zcopy": true, 00:04:33.525 "get_zone_info": false, 00:04:33.525 "zone_management": false, 00:04:33.525 "zone_append": false, 00:04:33.525 "compare": false, 00:04:33.525 "compare_and_write": false, 00:04:33.525 "abort": true, 00:04:33.525 "seek_hole": false, 00:04:33.525 "seek_data": false, 00:04:33.525 "copy": true, 00:04:33.525 "nvme_iov_md": false 00:04:33.525 }, 00:04:33.525 "memory_domains": [ 00:04:33.525 { 00:04:33.525 "dma_device_id": "system", 00:04:33.525 "dma_device_type": 1 00:04:33.525 }, 00:04:33.525 { 00:04:33.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.525 "dma_device_type": 2 00:04:33.525 } 00:04:33.525 ], 00:04:33.525 "driver_specific": {} 00:04:33.525 }, 00:04:33.525 { 00:04:33.525 "name": "Passthru0", 00:04:33.525 "aliases": [ 00:04:33.525 "105871dd-b14f-52c0-b172-c28a422f5325" 00:04:33.525 ], 00:04:33.525 "product_name": "passthru", 00:04:33.525 "block_size": 512, 00:04:33.525 "num_blocks": 16384, 00:04:33.525 "uuid": "105871dd-b14f-52c0-b172-c28a422f5325", 00:04:33.525 "assigned_rate_limits": { 00:04:33.525 "rw_ios_per_sec": 0, 00:04:33.525 "rw_mbytes_per_sec": 0, 00:04:33.525 "r_mbytes_per_sec": 0, 00:04:33.525 "w_mbytes_per_sec": 0 00:04:33.525 }, 00:04:33.525 "claimed": false, 00:04:33.525 "zoned": false, 00:04:33.525 "supported_io_types": { 00:04:33.525 "read": true, 00:04:33.525 "write": true, 00:04:33.525 "unmap": true, 00:04:33.525 "flush": true, 00:04:33.525 "reset": true, 00:04:33.525 "nvme_admin": false, 00:04:33.525 "nvme_io": false, 00:04:33.525 "nvme_io_md": false, 00:04:33.525 "write_zeroes": true, 00:04:33.525 "zcopy": true, 00:04:33.525 "get_zone_info": false, 00:04:33.525 "zone_management": false, 00:04:33.525 "zone_append": false, 00:04:33.525 "compare": false, 00:04:33.525 "compare_and_write": false, 00:04:33.525 "abort": true, 00:04:33.525 "seek_hole": false, 00:04:33.525 "seek_data": false, 00:04:33.525 "copy": true, 00:04:33.525 "nvme_iov_md": false 00:04:33.525 }, 00:04:33.525 "memory_domains": [ 00:04:33.525 { 00:04:33.525 "dma_device_id": "system", 00:04:33.525 "dma_device_type": 1 00:04:33.525 }, 00:04:33.525 { 00:04:33.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.525 "dma_device_type": 2 00:04:33.525 } 00:04:33.525 ], 00:04:33.525 "driver_specific": { 00:04:33.525 "passthru": { 00:04:33.525 "name": "Passthru0", 00:04:33.525 "base_bdev_name": "Malloc0" 00:04:33.525 } 00:04:33.525 } 00:04:33.525 } 00:04:33.525 ]' 00:04:33.525 15:41:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:33.783 15:41:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.783 15:41:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.783 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.783 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.783 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.783 15:41:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:33.783 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.783 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.783 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.783 15:41:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 
00:04:33.783 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.783 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.783 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.783 15:41:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.783 15:41:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:33.783 15:41:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:33.783 00:04:33.783 real 0m0.327s 00:04:33.783 user 0m0.176s 00:04:33.783 sys 0m0.055s 00:04:33.783 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.783 15:41:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.783 ************************************ 00:04:33.783 END TEST rpc_integrity 00:04:33.783 ************************************ 00:04:33.783 15:41:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:33.783 15:41:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.783 15:41:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.783 15:41:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.783 ************************************ 00:04:33.783 START TEST rpc_plugins 00:04:33.783 ************************************ 00:04:33.783 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:33.783 15:41:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:33.783 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.783 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.783 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.783 15:41:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:33.783 15:41:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:33.783 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.783 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:33.783 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.783 15:41:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:33.783 { 00:04:33.783 "name": "Malloc1", 00:04:33.783 "aliases": [ 00:04:33.783 "372deb1f-678c-4a7e-964f-0c4808d9b482" 00:04:33.783 ], 00:04:33.783 "product_name": "Malloc disk", 00:04:33.783 "block_size": 4096, 00:04:33.783 "num_blocks": 256, 00:04:33.783 "uuid": "372deb1f-678c-4a7e-964f-0c4808d9b482", 00:04:33.783 "assigned_rate_limits": { 00:04:33.783 "rw_ios_per_sec": 0, 00:04:33.783 "rw_mbytes_per_sec": 0, 00:04:33.783 "r_mbytes_per_sec": 0, 00:04:33.783 "w_mbytes_per_sec": 0 00:04:33.783 }, 00:04:33.783 "claimed": false, 00:04:33.783 "zoned": false, 00:04:33.783 "supported_io_types": { 00:04:33.783 "read": true, 00:04:33.783 "write": true, 00:04:33.783 "unmap": true, 00:04:33.783 "flush": true, 00:04:33.783 "reset": true, 00:04:33.783 "nvme_admin": false, 00:04:33.783 "nvme_io": false, 00:04:33.783 "nvme_io_md": false, 00:04:33.783 "write_zeroes": true, 00:04:33.783 "zcopy": true, 00:04:33.783 "get_zone_info": false, 00:04:33.783 "zone_management": false, 00:04:33.783 "zone_append": false, 00:04:33.783 "compare": false, 00:04:33.783 "compare_and_write": false, 00:04:33.783 "abort": true, 00:04:33.783 "seek_hole": false, 00:04:33.783 "seek_data": false, 00:04:33.783 "copy": true, 00:04:33.783 "nvme_iov_md": false 00:04:33.783 }, 
00:04:33.783 "memory_domains": [ 00:04:33.783 { 00:04:33.783 "dma_device_id": "system", 00:04:33.783 "dma_device_type": 1 00:04:33.783 }, 00:04:33.783 { 00:04:33.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.783 "dma_device_type": 2 00:04:33.783 } 00:04:33.783 ], 00:04:33.783 "driver_specific": {} 00:04:33.783 } 00:04:33.783 ]' 00:04:33.783 15:41:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:34.040 15:41:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.040 15:41:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.040 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.040 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.040 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.041 15:41:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.041 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.041 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.041 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.041 15:41:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.041 15:41:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:34.041 15:41:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:34.041 00:04:34.041 real 0m0.157s 00:04:34.041 user 0m0.086s 00:04:34.041 sys 0m0.031s 00:04:34.041 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.041 15:41:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.041 ************************************ 00:04:34.041 END TEST rpc_plugins 00:04:34.041 ************************************ 00:04:34.041 15:41:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:34.041 15:41:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.041 15:41:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.041 15:41:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.041 ************************************ 00:04:34.041 START TEST rpc_trace_cmd_test 00:04:34.041 ************************************ 00:04:34.041 15:41:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:34.041 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:34.041 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:34.041 15:41:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.041 15:41:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.041 15:41:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.041 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:34.041 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3116744", 00:04:34.041 "tpoint_group_mask": "0x8", 00:04:34.041 "iscsi_conn": { 00:04:34.041 "mask": "0x2", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "scsi": { 00:04:34.041 "mask": "0x4", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "bdev": { 00:04:34.041 "mask": "0x8", 00:04:34.041 "tpoint_mask": "0xffffffffffffffff" 00:04:34.041 }, 00:04:34.041 "nvmf_rdma": { 00:04:34.041 "mask": "0x10", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "nvmf_tcp": { 00:04:34.041 "mask": "0x20", 00:04:34.041 "tpoint_mask": "0x0" 
00:04:34.041 }, 00:04:34.041 "ftl": { 00:04:34.041 "mask": "0x40", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "blobfs": { 00:04:34.041 "mask": "0x80", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "dsa": { 00:04:34.041 "mask": "0x200", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "thread": { 00:04:34.041 "mask": "0x400", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "nvme_pcie": { 00:04:34.041 "mask": "0x800", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "iaa": { 00:04:34.041 "mask": "0x1000", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "nvme_tcp": { 00:04:34.041 "mask": "0x2000", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "bdev_nvme": { 00:04:34.041 "mask": "0x4000", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "sock": { 00:04:34.041 "mask": "0x8000", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "blob": { 00:04:34.041 "mask": "0x10000", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "bdev_raid": { 00:04:34.041 "mask": "0x20000", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 }, 00:04:34.041 "scheduler": { 00:04:34.041 "mask": "0x40000", 00:04:34.041 "tpoint_mask": "0x0" 00:04:34.041 } 00:04:34.041 }' 00:04:34.041 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:34.041 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:34.041 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:34.299 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:34.299 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:34.299 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:34.300 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:34.300 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:34.300 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:34.300 15:41:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:34.300 00:04:34.300 real 0m0.222s 00:04:34.300 user 0m0.181s 00:04:34.300 sys 0m0.033s 00:04:34.300 15:41:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.300 15:41:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.300 ************************************ 00:04:34.300 END TEST rpc_trace_cmd_test 00:04:34.300 ************************************ 00:04:34.300 15:41:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:34.300 15:41:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:34.300 15:41:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:34.300 15:41:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.300 15:41:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.300 15:41:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.300 ************************************ 00:04:34.300 START TEST rpc_daemon_integrity 00:04:34.300 ************************************ 00:04:34.300 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:34.300 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.300 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.300 15:41:19 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:34.300 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.300 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.300 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.558 { 00:04:34.558 "name": "Malloc2", 00:04:34.558 "aliases": [ 00:04:34.558 "7964b481-c2ab-49e4-a1fb-5f10dec5997f" 00:04:34.558 ], 00:04:34.558 "product_name": "Malloc disk", 00:04:34.558 "block_size": 512, 00:04:34.558 "num_blocks": 16384, 00:04:34.558 "uuid": "7964b481-c2ab-49e4-a1fb-5f10dec5997f", 00:04:34.558 "assigned_rate_limits": { 00:04:34.558 "rw_ios_per_sec": 0, 00:04:34.558 "rw_mbytes_per_sec": 0, 00:04:34.558 "r_mbytes_per_sec": 0, 00:04:34.558 "w_mbytes_per_sec": 0 00:04:34.558 }, 00:04:34.558 "claimed": false, 00:04:34.558 "zoned": false, 00:04:34.558 "supported_io_types": { 00:04:34.558 "read": true, 00:04:34.558 "write": true, 00:04:34.558 "unmap": true, 00:04:34.558 "flush": true, 00:04:34.558 "reset": true, 00:04:34.558 "nvme_admin": false, 00:04:34.558 "nvme_io": false, 00:04:34.558 "nvme_io_md": false, 00:04:34.558 "write_zeroes": true, 00:04:34.558 "zcopy": true, 00:04:34.558 "get_zone_info": false, 00:04:34.558 "zone_management": false, 00:04:34.558 "zone_append": false, 00:04:34.558 "compare": false, 00:04:34.558 "compare_and_write": false, 00:04:34.558 "abort": true, 00:04:34.558 "seek_hole": false, 00:04:34.558 "seek_data": false, 00:04:34.558 "copy": true, 00:04:34.558 "nvme_iov_md": false 00:04:34.558 }, 00:04:34.558 "memory_domains": [ 00:04:34.558 { 00:04:34.558 "dma_device_id": "system", 00:04:34.558 "dma_device_type": 1 00:04:34.558 }, 00:04:34.558 { 00:04:34.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.558 "dma_device_type": 2 00:04:34.558 } 00:04:34.558 ], 00:04:34.558 "driver_specific": {} 00:04:34.558 } 00:04:34.558 ]' 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.558 [2024-11-17 15:41:19.940220] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:34.558 [2024-11-17 15:41:19.940258] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.558 [2024-11-17 15:41:19.940282] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000022880 00:04:34.558 [2024-11-17 15:41:19.940294] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.558 [2024-11-17 15:41:19.942362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.558 [2024-11-17 15:41:19.942389] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.558 Passthru0 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.558 { 00:04:34.558 "name": "Malloc2", 00:04:34.558 "aliases": [ 00:04:34.558 "7964b481-c2ab-49e4-a1fb-5f10dec5997f" 00:04:34.558 ], 00:04:34.558 "product_name": "Malloc disk", 00:04:34.558 "block_size": 512, 00:04:34.558 "num_blocks": 16384, 00:04:34.558 "uuid": "7964b481-c2ab-49e4-a1fb-5f10dec5997f", 00:04:34.558 "assigned_rate_limits": { 00:04:34.558 "rw_ios_per_sec": 0, 00:04:34.558 "rw_mbytes_per_sec": 0, 00:04:34.558 "r_mbytes_per_sec": 0, 00:04:34.558 "w_mbytes_per_sec": 0 00:04:34.558 }, 00:04:34.558 "claimed": true, 00:04:34.558 "claim_type": "exclusive_write", 00:04:34.558 "zoned": false, 00:04:34.558 "supported_io_types": { 00:04:34.558 "read": true, 00:04:34.558 "write": true, 00:04:34.558 "unmap": true, 00:04:34.558 "flush": true, 00:04:34.558 "reset": true, 00:04:34.558 "nvme_admin": false, 00:04:34.558 "nvme_io": false, 00:04:34.558 "nvme_io_md": false, 00:04:34.558 "write_zeroes": true, 00:04:34.558 "zcopy": true, 00:04:34.558 "get_zone_info": false, 00:04:34.558 "zone_management": false, 00:04:34.558 "zone_append": false, 00:04:34.558 "compare": false, 00:04:34.558 "compare_and_write": false, 00:04:34.558 "abort": true, 00:04:34.558 "seek_hole": false, 00:04:34.558 "seek_data": false, 00:04:34.558 "copy": true, 00:04:34.558 "nvme_iov_md": false 00:04:34.558 }, 00:04:34.558 "memory_domains": [ 00:04:34.558 { 00:04:34.558 "dma_device_id": "system", 00:04:34.558 "dma_device_type": 1 00:04:34.558 }, 00:04:34.558 { 00:04:34.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.558 "dma_device_type": 2 00:04:34.558 } 00:04:34.558 ], 00:04:34.558 "driver_specific": {} 00:04:34.558 }, 00:04:34.558 { 00:04:34.558 "name": "Passthru0", 00:04:34.558 "aliases": [ 00:04:34.558 "fa9d7e3e-e58f-5f7d-8b9f-162a388d5fd8" 00:04:34.558 ], 00:04:34.558 "product_name": "passthru", 00:04:34.558 "block_size": 512, 00:04:34.558 "num_blocks": 16384, 00:04:34.558 "uuid": "fa9d7e3e-e58f-5f7d-8b9f-162a388d5fd8", 00:04:34.558 "assigned_rate_limits": { 00:04:34.558 "rw_ios_per_sec": 0, 00:04:34.558 "rw_mbytes_per_sec": 0, 00:04:34.558 "r_mbytes_per_sec": 0, 00:04:34.558 "w_mbytes_per_sec": 0 00:04:34.558 }, 00:04:34.558 "claimed": false, 00:04:34.558 "zoned": false, 00:04:34.558 "supported_io_types": { 00:04:34.558 "read": true, 00:04:34.558 "write": true, 00:04:34.558 "unmap": true, 00:04:34.558 "flush": true, 00:04:34.558 "reset": true, 00:04:34.558 "nvme_admin": 
false, 00:04:34.558 "nvme_io": false, 00:04:34.558 "nvme_io_md": false, 00:04:34.558 "write_zeroes": true, 00:04:34.558 "zcopy": true, 00:04:34.558 "get_zone_info": false, 00:04:34.558 "zone_management": false, 00:04:34.558 "zone_append": false, 00:04:34.558 "compare": false, 00:04:34.558 "compare_and_write": false, 00:04:34.558 "abort": true, 00:04:34.558 "seek_hole": false, 00:04:34.558 "seek_data": false, 00:04:34.558 "copy": true, 00:04:34.558 "nvme_iov_md": false 00:04:34.558 }, 00:04:34.558 "memory_domains": [ 00:04:34.558 { 00:04:34.558 "dma_device_id": "system", 00:04:34.558 "dma_device_type": 1 00:04:34.558 }, 00:04:34.558 { 00:04:34.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.558 "dma_device_type": 2 00:04:34.558 } 00:04:34.558 ], 00:04:34.558 "driver_specific": { 00:04:34.558 "passthru": { 00:04:34.558 "name": "Passthru0", 00:04:34.558 "base_bdev_name": "Malloc2" 00:04:34.558 } 00:04:34.558 } 00:04:34.558 } 00:04:34.558 ]' 00:04:34.558 15:41:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.558 15:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.816 15:41:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.816 00:04:34.816 real 0m0.331s 00:04:34.816 user 0m0.176s 00:04:34.816 sys 0m0.059s 00:04:34.816 15:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.816 15:41:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.816 ************************************ 00:04:34.816 END TEST rpc_daemon_integrity 00:04:34.816 ************************************ 00:04:34.817 15:41:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:34.817 15:41:20 rpc -- rpc/rpc.sh@84 -- # killprocess 3116744 00:04:34.817 15:41:20 rpc -- common/autotest_common.sh@954 -- # '[' -z 3116744 ']' 00:04:34.817 15:41:20 rpc -- common/autotest_common.sh@958 -- # kill -0 3116744 00:04:34.817 15:41:20 rpc -- common/autotest_common.sh@959 -- # uname 00:04:34.817 15:41:20 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.817 15:41:20 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3116744 00:04:34.817 15:41:20 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.817 15:41:20 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.817 15:41:20 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3116744' 00:04:34.817 killing process with pid 3116744 00:04:34.817 15:41:20 rpc -- common/autotest_common.sh@973 -- # kill 3116744 00:04:34.817 15:41:20 rpc -- common/autotest_common.sh@978 -- # wait 3116744 00:04:37.348 00:04:37.348 real 0m4.837s 00:04:37.348 user 0m5.413s 00:04:37.348 sys 0m0.986s 00:04:37.348 15:41:22 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.348 15:41:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.348 ************************************ 00:04:37.348 END TEST rpc 00:04:37.348 ************************************ 00:04:37.348 15:41:22 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:37.348 15:41:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.348 15:41:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.348 15:41:22 -- common/autotest_common.sh@10 -- # set +x 00:04:37.348 ************************************ 00:04:37.348 START TEST skip_rpc 00:04:37.348 ************************************ 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:37.348 * Looking for test storage... 00:04:37.348 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.348 15:41:22 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.348 --rc genhtml_branch_coverage=1 00:04:37.348 --rc genhtml_function_coverage=1 00:04:37.348 --rc genhtml_legend=1 00:04:37.348 --rc geninfo_all_blocks=1 00:04:37.348 --rc geninfo_unexecuted_blocks=1 00:04:37.348 00:04:37.348 ' 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.348 --rc genhtml_branch_coverage=1 00:04:37.348 --rc genhtml_function_coverage=1 00:04:37.348 --rc genhtml_legend=1 00:04:37.348 --rc geninfo_all_blocks=1 00:04:37.348 --rc geninfo_unexecuted_blocks=1 00:04:37.348 00:04:37.348 ' 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.348 --rc genhtml_branch_coverage=1 00:04:37.348 --rc genhtml_function_coverage=1 00:04:37.348 --rc genhtml_legend=1 00:04:37.348 --rc geninfo_all_blocks=1 00:04:37.348 --rc geninfo_unexecuted_blocks=1 00:04:37.348 00:04:37.348 ' 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.348 --rc genhtml_branch_coverage=1 00:04:37.348 --rc genhtml_function_coverage=1 00:04:37.348 --rc genhtml_legend=1 00:04:37.348 --rc geninfo_all_blocks=1 00:04:37.348 --rc geninfo_unexecuted_blocks=1 00:04:37.348 00:04:37.348 ' 00:04:37.348 15:41:22 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:37.348 15:41:22 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:37.348 15:41:22 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.348 15:41:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.348 ************************************ 00:04:37.348 START TEST skip_rpc 00:04:37.348 ************************************ 00:04:37.348 15:41:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:37.348 15:41:22 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3117787 00:04:37.348 15:41:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.348 15:41:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:37.348 15:41:22 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:37.348 [2024-11-17 15:41:22.814895] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:37.348 [2024-11-17 15:41:22.814977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3117787 ] 00:04:37.607 [2024-11-17 15:41:22.944344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.607 [2024-11-17 15:41:23.039292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:42.876 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3117787 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3117787 ']' 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3117787 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3117787 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3117787' 00:04:42.877 killing process with pid 3117787 00:04:42.877 15:41:27 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3117787 00:04:42.877 15:41:27 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3117787 00:04:44.780 00:04:44.780 real 0m7.250s 00:04:44.780 user 0m6.854s 00:04:44.780 sys 0m0.443s 00:04:44.780 15:41:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.780 15:41:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.780 ************************************ 00:04:44.780 END TEST skip_rpc 00:04:44.780 ************************************ 00:04:44.780 15:41:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:44.780 15:41:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.780 15:41:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.780 15:41:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.780 ************************************ 00:04:44.780 START TEST skip_rpc_with_json 00:04:44.780 ************************************ 00:04:44.780 15:41:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:44.780 15:41:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:44.780 15:41:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3119104 00:04:44.780 15:41:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.780 15:41:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.780 15:41:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3119104 00:04:44.780 15:41:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3119104 ']' 00:04:44.780 15:41:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.780 15:41:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.780 15:41:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.780 15:41:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.780 15:41:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.780 [2024-11-17 15:41:30.145709] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:44.780 [2024-11-17 15:41:30.145803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3119104 ] 00:04:44.780 [2024-11-17 15:41:30.278319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.039 [2024-11-17 15:41:30.378039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.606 [2024-11-17 15:41:31.105484] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:45.606 request: 00:04:45.606 { 00:04:45.606 "trtype": "tcp", 00:04:45.606 "method": "nvmf_get_transports", 00:04:45.606 "req_id": 1 00:04:45.606 } 00:04:45.606 Got JSON-RPC error response 00:04:45.606 response: 00:04:45.606 { 00:04:45.606 "code": -19, 00:04:45.606 "message": "No such device" 00:04:45.606 } 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.606 [2024-11-17 15:41:31.117603] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.606 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.864 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.864 15:41:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:45.864 { 00:04:45.864 "subsystems": [ 00:04:45.864 { 00:04:45.864 "subsystem": "fsdev", 00:04:45.864 "config": [ 00:04:45.864 { 00:04:45.864 "method": "fsdev_set_opts", 00:04:45.864 "params": { 00:04:45.864 "fsdev_io_pool_size": 65535, 00:04:45.864 "fsdev_io_cache_size": 256 00:04:45.864 } 00:04:45.864 } 00:04:45.864 ] 00:04:45.864 }, 00:04:45.864 { 00:04:45.864 "subsystem": "keyring", 00:04:45.864 "config": [] 00:04:45.864 }, 00:04:45.864 { 00:04:45.864 "subsystem": "iobuf", 00:04:45.864 "config": [ 00:04:45.864 { 00:04:45.864 "method": "iobuf_set_options", 00:04:45.864 "params": { 00:04:45.864 "small_pool_count": 8192, 00:04:45.864 "large_pool_count": 1024, 00:04:45.864 "small_bufsize": 8192, 00:04:45.864 "large_bufsize": 135168, 00:04:45.864 "enable_numa": false 00:04:45.864 } 00:04:45.864 } 00:04:45.864 ] 00:04:45.864 }, 00:04:45.864 { 00:04:45.864 "subsystem": "sock", 00:04:45.864 "config": [ 00:04:45.864 { 
00:04:45.864 "method": "sock_set_default_impl", 00:04:45.864 "params": { 00:04:45.864 "impl_name": "posix" 00:04:45.864 } 00:04:45.864 }, 00:04:45.864 { 00:04:45.864 "method": "sock_impl_set_options", 00:04:45.864 "params": { 00:04:45.864 "impl_name": "ssl", 00:04:45.864 "recv_buf_size": 4096, 00:04:45.864 "send_buf_size": 4096, 00:04:45.864 "enable_recv_pipe": true, 00:04:45.864 "enable_quickack": false, 00:04:45.864 "enable_placement_id": 0, 00:04:45.864 "enable_zerocopy_send_server": true, 00:04:45.864 "enable_zerocopy_send_client": false, 00:04:45.864 "zerocopy_threshold": 0, 00:04:45.864 "tls_version": 0, 00:04:45.864 "enable_ktls": false 00:04:45.864 } 00:04:45.864 }, 00:04:45.864 { 00:04:45.864 "method": "sock_impl_set_options", 00:04:45.864 "params": { 00:04:45.864 "impl_name": "posix", 00:04:45.864 "recv_buf_size": 2097152, 00:04:45.864 "send_buf_size": 2097152, 00:04:45.864 "enable_recv_pipe": true, 00:04:45.864 "enable_quickack": false, 00:04:45.864 "enable_placement_id": 0, 00:04:45.864 "enable_zerocopy_send_server": true, 00:04:45.864 "enable_zerocopy_send_client": false, 00:04:45.864 "zerocopy_threshold": 0, 00:04:45.864 "tls_version": 0, 00:04:45.864 "enable_ktls": false 00:04:45.864 } 00:04:45.864 } 00:04:45.864 ] 00:04:45.864 }, 00:04:45.864 { 00:04:45.864 "subsystem": "vmd", 00:04:45.864 "config": [] 00:04:45.864 }, 00:04:45.864 { 00:04:45.864 "subsystem": "accel", 00:04:45.864 "config": [ 00:04:45.864 { 00:04:45.864 "method": "accel_set_options", 00:04:45.864 "params": { 00:04:45.864 "small_cache_size": 128, 00:04:45.864 "large_cache_size": 16, 00:04:45.864 "task_count": 2048, 00:04:45.864 "sequence_count": 2048, 00:04:45.864 "buf_count": 2048 00:04:45.864 } 00:04:45.864 } 00:04:45.864 ] 00:04:45.864 }, 00:04:45.864 { 00:04:45.864 "subsystem": "bdev", 00:04:45.864 "config": [ 00:04:45.864 { 00:04:45.864 "method": "bdev_set_options", 00:04:45.864 "params": { 00:04:45.864 "bdev_io_pool_size": 65535, 00:04:45.864 "bdev_io_cache_size": 256, 00:04:45.864 "bdev_auto_examine": true, 00:04:45.864 "iobuf_small_cache_size": 128, 00:04:45.864 "iobuf_large_cache_size": 16 00:04:45.864 } 00:04:45.864 }, 00:04:45.864 { 00:04:45.864 "method": "bdev_raid_set_options", 00:04:45.864 "params": { 00:04:45.864 "process_window_size_kb": 1024, 00:04:45.864 "process_max_bandwidth_mb_sec": 0 00:04:45.864 } 00:04:45.864 }, 00:04:45.864 { 00:04:45.864 "method": "bdev_iscsi_set_options", 00:04:45.864 "params": { 00:04:45.864 "timeout_sec": 30 00:04:45.864 } 00:04:45.864 }, 00:04:45.864 { 00:04:45.864 "method": "bdev_nvme_set_options", 00:04:45.864 "params": { 00:04:45.864 "action_on_timeout": "none", 00:04:45.864 "timeout_us": 0, 00:04:45.864 "timeout_admin_us": 0, 00:04:45.864 "keep_alive_timeout_ms": 10000, 00:04:45.864 "arbitration_burst": 0, 00:04:45.864 "low_priority_weight": 0, 00:04:45.864 "medium_priority_weight": 0, 00:04:45.864 "high_priority_weight": 0, 00:04:45.864 "nvme_adminq_poll_period_us": 10000, 00:04:45.864 "nvme_ioq_poll_period_us": 0, 00:04:45.864 "io_queue_requests": 0, 00:04:45.864 "delay_cmd_submit": true, 00:04:45.864 "transport_retry_count": 4, 00:04:45.864 "bdev_retry_count": 3, 00:04:45.864 "transport_ack_timeout": 0, 00:04:45.865 "ctrlr_loss_timeout_sec": 0, 00:04:45.865 "reconnect_delay_sec": 0, 00:04:45.865 "fast_io_fail_timeout_sec": 0, 00:04:45.865 "disable_auto_failback": false, 00:04:45.865 "generate_uuids": false, 00:04:45.865 "transport_tos": 0, 00:04:45.865 "nvme_error_stat": false, 00:04:45.865 "rdma_srq_size": 0, 00:04:45.865 "io_path_stat": false, 
00:04:45.865 "allow_accel_sequence": false, 00:04:45.865 "rdma_max_cq_size": 0, 00:04:45.865 "rdma_cm_event_timeout_ms": 0, 00:04:45.865 "dhchap_digests": [ 00:04:45.865 "sha256", 00:04:45.865 "sha384", 00:04:45.865 "sha512" 00:04:45.865 ], 00:04:45.865 "dhchap_dhgroups": [ 00:04:45.865 "null", 00:04:45.865 "ffdhe2048", 00:04:45.865 "ffdhe3072", 00:04:45.865 "ffdhe4096", 00:04:45.865 "ffdhe6144", 00:04:45.865 "ffdhe8192" 00:04:45.865 ] 00:04:45.865 } 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "method": "bdev_nvme_set_hotplug", 00:04:45.865 "params": { 00:04:45.865 "period_us": 100000, 00:04:45.865 "enable": false 00:04:45.865 } 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "method": "bdev_wait_for_examine" 00:04:45.865 } 00:04:45.865 ] 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "subsystem": "scsi", 00:04:45.865 "config": null 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "subsystem": "scheduler", 00:04:45.865 "config": [ 00:04:45.865 { 00:04:45.865 "method": "framework_set_scheduler", 00:04:45.865 "params": { 00:04:45.865 "name": "static" 00:04:45.865 } 00:04:45.865 } 00:04:45.865 ] 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "subsystem": "vhost_scsi", 00:04:45.865 "config": [] 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "subsystem": "vhost_blk", 00:04:45.865 "config": [] 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "subsystem": "ublk", 00:04:45.865 "config": [] 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "subsystem": "nbd", 00:04:45.865 "config": [] 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "subsystem": "nvmf", 00:04:45.865 "config": [ 00:04:45.865 { 00:04:45.865 "method": "nvmf_set_config", 00:04:45.865 "params": { 00:04:45.865 "discovery_filter": "match_any", 00:04:45.865 "admin_cmd_passthru": { 00:04:45.865 "identify_ctrlr": false 00:04:45.865 }, 00:04:45.865 "dhchap_digests": [ 00:04:45.865 "sha256", 00:04:45.865 "sha384", 00:04:45.865 "sha512" 00:04:45.865 ], 00:04:45.865 "dhchap_dhgroups": [ 00:04:45.865 "null", 00:04:45.865 "ffdhe2048", 00:04:45.865 "ffdhe3072", 00:04:45.865 "ffdhe4096", 00:04:45.865 "ffdhe6144", 00:04:45.865 "ffdhe8192" 00:04:45.865 ] 00:04:45.865 } 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "method": "nvmf_set_max_subsystems", 00:04:45.865 "params": { 00:04:45.865 "max_subsystems": 1024 00:04:45.865 } 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "method": "nvmf_set_crdt", 00:04:45.865 "params": { 00:04:45.865 "crdt1": 0, 00:04:45.865 "crdt2": 0, 00:04:45.865 "crdt3": 0 00:04:45.865 } 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "method": "nvmf_create_transport", 00:04:45.865 "params": { 00:04:45.865 "trtype": "TCP", 00:04:45.865 "max_queue_depth": 128, 00:04:45.865 "max_io_qpairs_per_ctrlr": 127, 00:04:45.865 "in_capsule_data_size": 4096, 00:04:45.865 "max_io_size": 131072, 00:04:45.865 "io_unit_size": 131072, 00:04:45.865 "max_aq_depth": 128, 00:04:45.865 "num_shared_buffers": 511, 00:04:45.865 "buf_cache_size": 4294967295, 00:04:45.865 "dif_insert_or_strip": false, 00:04:45.865 "zcopy": false, 00:04:45.865 "c2h_success": true, 00:04:45.865 "sock_priority": 0, 00:04:45.865 "abort_timeout_sec": 1, 00:04:45.865 "ack_timeout": 0, 00:04:45.865 "data_wr_pool_size": 0 00:04:45.865 } 00:04:45.865 } 00:04:45.865 ] 00:04:45.865 }, 00:04:45.865 { 00:04:45.865 "subsystem": "iscsi", 00:04:45.865 "config": [ 00:04:45.865 { 00:04:45.865 "method": "iscsi_set_options", 00:04:45.865 "params": { 00:04:45.865 "node_base": "iqn.2016-06.io.spdk", 00:04:45.865 "max_sessions": 128, 00:04:45.865 "max_connections_per_session": 2, 00:04:45.865 "max_queue_depth": 64, 00:04:45.865 
"default_time2wait": 2, 00:04:45.865 "default_time2retain": 20, 00:04:45.865 "first_burst_length": 8192, 00:04:45.865 "immediate_data": true, 00:04:45.865 "allow_duplicated_isid": false, 00:04:45.865 "error_recovery_level": 0, 00:04:45.865 "nop_timeout": 60, 00:04:45.865 "nop_in_interval": 30, 00:04:45.865 "disable_chap": false, 00:04:45.865 "require_chap": false, 00:04:45.865 "mutual_chap": false, 00:04:45.865 "chap_group": 0, 00:04:45.865 "max_large_datain_per_connection": 64, 00:04:45.865 "max_r2t_per_connection": 4, 00:04:45.865 "pdu_pool_size": 36864, 00:04:45.865 "immediate_data_pool_size": 16384, 00:04:45.865 "data_out_pool_size": 2048 00:04:45.865 } 00:04:45.865 } 00:04:45.865 ] 00:04:45.865 } 00:04:45.865 ] 00:04:45.865 } 00:04:45.865 15:41:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:45.865 15:41:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3119104 00:04:45.865 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3119104 ']' 00:04:45.865 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3119104 00:04:45.865 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:45.865 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.865 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3119104 00:04:45.865 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.865 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.865 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3119104' 00:04:45.865 killing process with pid 3119104 00:04:45.865 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3119104 00:04:45.865 15:41:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3119104 00:04:48.397 15:41:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3119694 00:04:48.397 15:41:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:48.397 15:41:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:53.666 15:41:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3119694 00:04:53.666 15:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3119694 ']' 00:04:53.666 15:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3119694 00:04:53.666 15:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:53.666 15:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.666 15:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3119694 00:04:53.666 15:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.666 15:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.666 15:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3119694' 00:04:53.666 killing process with pid 3119694 00:04:53.666 15:41:38 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3119694 00:04:53.666 15:41:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3119694 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:04:55.569 00:04:55.569 real 0m10.757s 00:04:55.569 user 0m10.250s 00:04:55.569 sys 0m0.978s 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.569 ************************************ 00:04:55.569 END TEST skip_rpc_with_json 00:04:55.569 ************************************ 00:04:55.569 15:41:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:55.569 15:41:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.569 15:41:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.569 15:41:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.569 ************************************ 00:04:55.569 START TEST skip_rpc_with_delay 00:04:55.569 ************************************ 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:55.569 15:41:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.569 [2024-11-17 15:41:40.978746] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
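Note (not part of the captured output): the skip_rpc_with_delay case above passes precisely because spdk_tgt refuses to start — with --no-rpc-server there is no RPC server to wait for, so --wait-for-rpc is rejected and the NOT() wrapper counts the non-zero exit as success. A minimal stand-alone sketch of that assertion follows; the PASS/FAIL messages and the plain if/else in place of the NOT() helper are illustrative assumptions, while the spdk_tgt command line and the expected failure are exactly what the log shows.

#!/usr/bin/env bash
# Sketch: spdk_tgt must exit non-zero when --wait-for-rpc is combined with --no-rpc-server.
SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
if "$SPDK_BIN" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: spdk_tgt started even though --wait-for-rpc cannot be honored" >&2
    exit 1
fi
echo "PASS: spdk_tgt exited non-zero, as the test expects"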
00:04:55.569 15:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:55.569 15:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:55.569 15:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:55.569 15:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:55.569 00:04:55.569 real 0m0.156s 00:04:55.569 user 0m0.079s 00:04:55.569 sys 0m0.075s 00:04:55.569 15:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.569 15:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:55.569 ************************************ 00:04:55.569 END TEST skip_rpc_with_delay 00:04:55.569 ************************************ 00:04:55.569 15:41:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:55.569 15:41:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:55.569 15:41:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:55.569 15:41:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.569 15:41:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.569 15:41:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.828 ************************************ 00:04:55.828 START TEST exit_on_failed_rpc_init 00:04:55.828 ************************************ 00:04:55.828 15:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:55.828 15:41:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3121080 00:04:55.828 15:41:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3121080 00:04:55.828 15:41:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.828 15:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3121080 ']' 00:04:55.828 15:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.828 15:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.828 15:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.828 15:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.828 15:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.828 [2024-11-17 15:41:41.216555] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:55.828 [2024-11-17 15:41:41.216668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3121080 ] 00:04:55.828 [2024-11-17 15:41:41.348042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.086 [2024-11-17 15:41:41.442715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:56.654 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.942 [2024-11-17 15:41:42.262124] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:56.942 [2024-11-17 15:41:42.262210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3121247 ] 00:04:56.942 [2024-11-17 15:41:42.392675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.210 [2024-11-17 15:41:42.494196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.210 [2024-11-17 15:41:42.494277] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
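Note (not part of the captured output): the error above is the expected outcome of exit_on_failed_rpc_init — the first spdk_tgt (pid 3121080) already listens on the default RPC socket /var/tmp/spdk.sock, so the second instance started with -m 0x2 cannot bind it and must shut down with a non-zero status, which the test then treats as a pass. A hedged stand-alone sketch of that two-instance scenario follows; the sleep used in place of the waitforlisten helper and the cleanup/echo lines are simplifications, not taken from skip_rpc.sh.

#!/usr/bin/env bash
# Sketch: a second spdk_tgt on the same default RPC socket must fail to initialize.
SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
"$SPDK_BIN" -m 0x1 &            # first target takes /var/tmp/spdk.sock
first_pid=$!
sleep 2                          # crude stand-in for the waitforlisten helper
if "$SPDK_BIN" -m 0x2; then      # second target should hit "socket ... in use" and exit non-zero
    echo "FAIL: second spdk_tgt unexpectedly started" >&2
    kill "$first_pid"
    exit 1
fi
echo "PASS: second spdk_tgt exited non-zero (RPC socket in use)"
kill "$first_pid"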
00:04:57.210 [2024-11-17 15:41:42.494300] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:57.210 [2024-11-17 15:41:42.494312] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3121080 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3121080 ']' 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3121080 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.210 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3121080 00:04:57.468 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.468 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.468 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3121080' 00:04:57.468 killing process with pid 3121080 00:04:57.468 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3121080 00:04:57.468 15:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3121080 00:04:59.424 00:04:59.424 real 0m3.833s 00:04:59.424 user 0m4.105s 00:04:59.424 sys 0m0.694s 00:04:59.424 15:41:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.424 15:41:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.424 ************************************ 00:04:59.424 END TEST exit_on_failed_rpc_init 00:04:59.424 ************************************ 00:04:59.682 15:41:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:04:59.682 00:04:59.682 real 0m22.506s 00:04:59.682 user 0m21.497s 00:04:59.682 sys 0m2.534s 00:04:59.682 15:41:44 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.682 15:41:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.682 ************************************ 00:04:59.682 END TEST skip_rpc 00:04:59.682 ************************************ 00:04:59.682 15:41:45 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.682 15:41:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.682 15:41:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.682 15:41:45 -- 
common/autotest_common.sh@10 -- # set +x 00:04:59.682 ************************************ 00:04:59.682 START TEST rpc_client 00:04:59.682 ************************************ 00:04:59.682 15:41:45 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.682 * Looking for test storage... 00:04:59.682 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:04:59.682 15:41:45 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.683 15:41:45 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:59.683 15:41:45 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.943 15:41:45 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.943 15:41:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:59.943 15:41:45 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.943 15:41:45 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:59.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.943 --rc genhtml_branch_coverage=1 00:04:59.943 --rc genhtml_function_coverage=1 00:04:59.943 --rc genhtml_legend=1 00:04:59.943 --rc geninfo_all_blocks=1 00:04:59.943 --rc geninfo_unexecuted_blocks=1 00:04:59.943 00:04:59.943 ' 00:04:59.943 15:41:45 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:59.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.943 --rc genhtml_branch_coverage=1 00:04:59.943 --rc genhtml_function_coverage=1 00:04:59.943 --rc genhtml_legend=1 00:04:59.943 --rc geninfo_all_blocks=1 00:04:59.943 --rc geninfo_unexecuted_blocks=1 00:04:59.943 00:04:59.943 ' 00:04:59.943 15:41:45 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:59.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.943 --rc genhtml_branch_coverage=1 00:04:59.943 --rc genhtml_function_coverage=1 00:04:59.943 --rc genhtml_legend=1 00:04:59.943 --rc geninfo_all_blocks=1 00:04:59.943 --rc geninfo_unexecuted_blocks=1 00:04:59.943 00:04:59.943 ' 00:04:59.943 15:41:45 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:59.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.943 --rc genhtml_branch_coverage=1 00:04:59.943 --rc genhtml_function_coverage=1 00:04:59.943 --rc genhtml_legend=1 00:04:59.943 --rc geninfo_all_blocks=1 00:04:59.943 --rc geninfo_unexecuted_blocks=1 00:04:59.943 00:04:59.943 ' 00:04:59.943 15:41:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:59.943 OK 00:04:59.943 15:41:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:59.943 00:04:59.943 real 0m0.250s 00:04:59.943 user 0m0.119s 00:04:59.943 sys 0m0.145s 00:04:59.943 15:41:45 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.943 15:41:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:59.943 ************************************ 00:04:59.943 END TEST rpc_client 00:04:59.943 ************************************ 00:04:59.943 15:41:45 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.943 
15:41:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.943 15:41:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.943 15:41:45 -- common/autotest_common.sh@10 -- # set +x 00:04:59.943 ************************************ 00:04:59.943 START TEST json_config 00:04:59.943 ************************************ 00:04:59.943 15:41:45 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.943 15:41:45 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:59.943 15:41:45 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:59.943 15:41:45 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.204 15:41:45 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.204 15:41:45 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.204 15:41:45 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.204 15:41:45 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.204 15:41:45 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.204 15:41:45 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.204 15:41:45 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.204 15:41:45 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.204 15:41:45 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.204 15:41:45 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.204 15:41:45 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.204 15:41:45 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.204 15:41:45 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:00.204 15:41:45 json_config -- scripts/common.sh@345 -- # : 1 00:05:00.204 15:41:45 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.204 15:41:45 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.204 15:41:45 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:00.204 15:41:45 json_config -- scripts/common.sh@353 -- # local d=1 00:05:00.204 15:41:45 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.204 15:41:45 json_config -- scripts/common.sh@355 -- # echo 1 00:05:00.204 15:41:45 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.204 15:41:45 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:00.204 15:41:45 json_config -- scripts/common.sh@353 -- # local d=2 00:05:00.204 15:41:45 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.204 15:41:45 json_config -- scripts/common.sh@355 -- # echo 2 00:05:00.204 15:41:45 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.204 15:41:45 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.204 15:41:45 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.204 15:41:45 json_config -- scripts/common.sh@368 -- # return 0 00:05:00.204 15:41:45 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.204 15:41:45 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.204 --rc genhtml_branch_coverage=1 00:05:00.204 --rc genhtml_function_coverage=1 00:05:00.204 --rc genhtml_legend=1 00:05:00.204 --rc geninfo_all_blocks=1 00:05:00.204 --rc geninfo_unexecuted_blocks=1 00:05:00.204 00:05:00.204 ' 00:05:00.204 15:41:45 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.204 --rc genhtml_branch_coverage=1 00:05:00.204 --rc genhtml_function_coverage=1 00:05:00.204 --rc genhtml_legend=1 00:05:00.204 --rc geninfo_all_blocks=1 00:05:00.204 --rc geninfo_unexecuted_blocks=1 00:05:00.204 00:05:00.204 ' 00:05:00.204 15:41:45 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.204 --rc genhtml_branch_coverage=1 00:05:00.204 --rc genhtml_function_coverage=1 00:05:00.204 --rc genhtml_legend=1 00:05:00.204 --rc geninfo_all_blocks=1 00:05:00.204 --rc geninfo_unexecuted_blocks=1 00:05:00.204 00:05:00.204 ' 00:05:00.204 15:41:45 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.204 --rc genhtml_branch_coverage=1 00:05:00.204 --rc genhtml_function_coverage=1 00:05:00.204 --rc genhtml_legend=1 00:05:00.204 --rc geninfo_all_blocks=1 00:05:00.204 --rc geninfo_unexecuted_blocks=1 00:05:00.204 00:05:00.204 ' 00:05:00.204 15:41:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:00.204 15:41:45 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.204 15:41:45 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:00.204 15:41:45 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:00.204 15:41:45 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.204 15:41:45 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.204 15:41:45 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.204 15:41:45 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.204 15:41:45 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.205 15:41:45 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.205 15:41:45 json_config -- paths/export.sh@5 -- # export PATH 00:05:00.205 15:41:45 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.205 15:41:45 json_config -- nvmf/common.sh@51 -- # : 0 00:05:00.205 15:41:45 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:00.205 15:41:45 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:00.205 
15:41:45 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.205 15:41:45 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.205 15:41:45 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.205 15:41:45 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:00.205 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:00.205 15:41:45 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:00.205 15:41:45 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:00.205 15:41:45 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:00.205 INFO: JSON configuration test init 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:00.205 15:41:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.205 15:41:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:00.205 15:41:45 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.205 15:41:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.205 15:41:45 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:00.205 15:41:45 json_config -- json_config/common.sh@9 -- # 
local app=target 00:05:00.205 15:41:45 json_config -- json_config/common.sh@10 -- # shift 00:05:00.205 15:41:45 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.205 15:41:45 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.205 15:41:45 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.205 15:41:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.205 15:41:45 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.205 15:41:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3122018 00:05:00.205 15:41:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:00.205 Waiting for target to run... 00:05:00.205 15:41:45 json_config -- json_config/common.sh@25 -- # waitforlisten 3122018 /var/tmp/spdk_tgt.sock 00:05:00.205 15:41:45 json_config -- common/autotest_common.sh@835 -- # '[' -z 3122018 ']' 00:05:00.205 15:41:45 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.205 15:41:45 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.205 15:41:45 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:00.205 15:41:45 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.205 15:41:45 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.205 15:41:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.205 [2024-11-17 15:41:45.686151] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:00.205 [2024-11-17 15:41:45.686254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3122018 ] 00:05:00.774 [2024-11-17 15:41:46.032971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.774 [2024-11-17 15:41:46.120156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.033 15:41:46 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.033 15:41:46 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:01.033 15:41:46 json_config -- json_config/common.sh@26 -- # echo '' 00:05:01.033 00:05:01.033 15:41:46 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:01.033 15:41:46 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:01.033 15:41:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:01.033 15:41:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.033 15:41:46 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:01.033 15:41:46 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:01.033 15:41:46 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.033 15:41:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.033 15:41:46 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:01.033 15:41:46 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:01.033 15:41:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:05.221 15:41:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.221 15:41:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:05.221 15:41:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@54 -- 
# echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@54 -- # sort 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:05.221 15:41:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.221 15:41:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:05.221 15:41:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.221 15:41:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:05:05.221 15:41:50 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:05:05.221 15:41:50 json_config -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:05:05.221 15:41:50 json_config -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:05.221 15:41:50 json_config -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:05.221 15:41:50 json_config -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:05.221 15:41:50 json_config -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:05.221 15:41:50 json_config -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:05.221 15:41:50 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:05.221 15:41:50 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:05.221 15:41:50 json_config -- nvmf/common.sh@442 -- # [[ phy-fallback != virt ]] 00:05:05.221 15:41:50 json_config -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:05.221 15:41:50 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:05:05.221 15:41:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:11.781 
15:41:57 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@320 -- # e810=() 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@321 -- # x722=() 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@322 -- # mlx=() 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:11.781 15:41:57 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:05:11.782 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:05:11.782 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:05:11.782 15:41:57 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:05:11.782 Found net devices under 0000:d9:00.0: mlx_0_0 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:05:11.782 Found net devices under 0000:d9:00.1: mlx_0_1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@442 -- # is_hw=yes 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@448 -- # rdma_device_init 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@62 -- # uname 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@530 -- # allocate_nic_ips 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:11.782 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:11.782 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:05:11.782 altname enp217s0f0np0 00:05:11.782 altname ens818f0np0 00:05:11.782 inet 192.168.100.8/24 scope global mlx_0_0 00:05:11.782 valid_lft forever preferred_lft forever 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:11.782 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:11.782 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:05:11.782 altname enp217s0f1np1 00:05:11.782 altname ens818f1np1 
00:05:11.782 inet 192.168.100.9/24 scope global mlx_0_1 00:05:11.782 valid_lft forever preferred_lft forever 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@450 -- # return 0 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@109 -- # continue 2 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:05:11.782 192.168.100.9' 00:05:11.782 15:41:57 json_config -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:05:11.782 192.168.100.9' 00:05:11.783 15:41:57 json_config -- nvmf/common.sh@485 -- # head -n 1 00:05:11.783 15:41:57 json_config -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:11.783 15:41:57 json_config -- 
nvmf/common.sh@486 -- # head -n 1 00:05:11.783 15:41:57 json_config -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:05:11.783 192.168.100.9' 00:05:11.783 15:41:57 json_config -- nvmf/common.sh@486 -- # tail -n +2 00:05:11.783 15:41:57 json_config -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:11.783 15:41:57 json_config -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:05:11.783 15:41:57 json_config -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:11.783 15:41:57 json_config -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:05:11.783 15:41:57 json_config -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:05:11.783 15:41:57 json_config -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:05:12.041 15:41:57 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:05:12.041 15:41:57 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:12.041 15:41:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:12.041 MallocForNvmf0 00:05:12.041 15:41:57 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:12.041 15:41:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:12.299 MallocForNvmf1 00:05:12.299 15:41:57 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:12.299 15:41:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:12.558 [2024-11-17 15:41:57.871717] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:12.558 [2024-11-17 15:41:57.905069] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7f66ddeba940) succeed. 00:05:12.558 [2024-11-17 15:41:57.917490] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7f66dde76940) succeed. 
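For reference, the address discovery traced above reduces to reading the IPv4 address off each Mellanox port. A minimal stand-alone sketch of the same lookup, assuming the ports are already named mlx_0_0 and mlx_0_1 as in this run:

# Sketch only: read the IPv4 address assigned to each RDMA-capable port.
for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done
# In this run the two ports report 192.168.100.8 and 192.168.100.9.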
00:05:12.558 15:41:57 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:12.558 15:41:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:12.816 15:41:58 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:12.816 15:41:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:12.816 15:41:58 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:12.816 15:41:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:13.075 15:41:58 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:13.075 15:41:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:13.333 [2024-11-17 15:41:58.691414] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:13.333 15:41:58 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:13.333 15:41:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.333 15:41:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.333 15:41:58 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:13.333 15:41:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.333 15:41:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.333 15:41:58 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:13.333 15:41:58 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:13.333 15:41:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:13.592 MallocBdevForConfigChangeCheck 00:05:13.592 15:41:58 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:13.592 15:41:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:13.592 15:41:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.592 15:41:59 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:13.592 15:41:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.849 15:41:59 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:13.849 INFO: shutting down applications... 
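The target configuration that json_config assembled above can be reproduced by hand with the same RPCs seen in the trace. A condensed sketch, where RPC abbreviates the full scripts/rpc.py path used in this run and the NQN, sizes, and listen address are taken from the log:

RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t rdma -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420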
00:05:13.849 15:41:59 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:13.849 15:41:59 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:13.849 15:41:59 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:13.849 15:41:59 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:16.380 Calling clear_iscsi_subsystem 00:05:16.380 Calling clear_nvmf_subsystem 00:05:16.380 Calling clear_nbd_subsystem 00:05:16.380 Calling clear_ublk_subsystem 00:05:16.380 Calling clear_vhost_blk_subsystem 00:05:16.380 Calling clear_vhost_scsi_subsystem 00:05:16.380 Calling clear_bdev_subsystem 00:05:16.638 15:42:01 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:16.638 15:42:01 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:16.638 15:42:01 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:16.638 15:42:01 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.638 15:42:01 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:16.638 15:42:01 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:16.897 15:42:02 json_config -- json_config/json_config.sh@352 -- # break 00:05:16.897 15:42:02 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:16.897 15:42:02 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:16.897 15:42:02 json_config -- json_config/common.sh@31 -- # local app=target 00:05:16.897 15:42:02 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:16.897 15:42:02 json_config -- json_config/common.sh@35 -- # [[ -n 3122018 ]] 00:05:16.897 15:42:02 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3122018 00:05:16.897 15:42:02 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:16.897 15:42:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.897 15:42:02 json_config -- json_config/common.sh@41 -- # kill -0 3122018 00:05:16.897 15:42:02 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.464 15:42:02 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.464 15:42:02 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.464 15:42:02 json_config -- json_config/common.sh@41 -- # kill -0 3122018 00:05:17.464 15:42:02 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.031 15:42:03 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.031 15:42:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.031 15:42:03 json_config -- json_config/common.sh@41 -- # kill -0 3122018 00:05:18.031 15:42:03 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:18.031 15:42:03 json_config -- json_config/common.sh@43 -- # break 00:05:18.031 15:42:03 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:18.031 15:42:03 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:18.031 SPDK target shutdown done 00:05:18.031 15:42:03 json_config -- 
json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:18.031 INFO: relaunching applications... 00:05:18.031 15:42:03 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.031 15:42:03 json_config -- json_config/common.sh@9 -- # local app=target 00:05:18.031 15:42:03 json_config -- json_config/common.sh@10 -- # shift 00:05:18.031 15:42:03 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.031 15:42:03 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.031 15:42:03 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.031 15:42:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.031 15:42:03 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.031 15:42:03 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3127323 00:05:18.031 15:42:03 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.031 Waiting for target to run... 00:05:18.031 15:42:03 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.031 15:42:03 json_config -- json_config/common.sh@25 -- # waitforlisten 3127323 /var/tmp/spdk_tgt.sock 00:05:18.031 15:42:03 json_config -- common/autotest_common.sh@835 -- # '[' -z 3127323 ']' 00:05:18.031 15:42:03 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.031 15:42:03 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.031 15:42:03 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.031 15:42:03 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.031 15:42:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.031 [2024-11-17 15:42:03.389992] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:18.031 [2024-11-17 15:42:03.390087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127323 ] 00:05:18.290 [2024-11-17 15:42:03.747147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.549 [2024-11-17 15:42:03.841950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.742 [2024-11-17 15:42:07.469547] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a1c0/0x7f3101f84940) succeed. 00:05:22.742 [2024-11-17 15:42:07.481062] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002a340/0x7f3101f40940) succeed. 
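The relaunch above is the core of the json_config test: the live configuration is saved to JSON, the target is shut down, and a fresh spdk_tgt is started from the saved file. In outline, with the flags and paths used in this run:

# Save the running target's configuration, then restart from it.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json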
00:05:22.742 [2024-11-17 15:42:07.541975] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:22.742 15:42:07 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.742 15:42:07 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:22.742 15:42:07 json_config -- json_config/common.sh@26 -- # echo '' 00:05:22.742 00:05:22.742 15:42:07 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:22.742 15:42:07 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:22.742 INFO: Checking if target configuration is the same... 00:05:22.742 15:42:07 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.742 15:42:07 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:22.742 15:42:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.742 + '[' 2 -ne 2 ']' 00:05:22.742 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:22.742 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:22.742 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:22.742 +++ basename /dev/fd/62 00:05:22.742 ++ mktemp /tmp/62.XXX 00:05:22.742 + tmp_file_1=/tmp/62.IXI 00:05:22.742 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.742 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.742 + tmp_file_2=/tmp/spdk_tgt_config.json.og2 00:05:22.742 + ret=0 00:05:22.742 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.742 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:22.742 + diff -u /tmp/62.IXI /tmp/spdk_tgt_config.json.og2 00:05:22.742 + echo 'INFO: JSON config files are the same' 00:05:22.742 INFO: JSON config files are the same 00:05:22.742 + rm /tmp/62.IXI /tmp/spdk_tgt_config.json.og2 00:05:22.742 + exit 0 00:05:22.742 15:42:07 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:22.742 15:42:07 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:22.742 INFO: changing configuration and checking if this can be detected... 
00:05:22.742 15:42:07 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.742 15:42:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:22.742 15:42:08 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:22.742 15:42:08 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.742 15:42:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:22.742 + '[' 2 -ne 2 ']' 00:05:22.742 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:22.742 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:05:22.742 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:05:22.742 +++ basename /dev/fd/62 00:05:22.742 ++ mktemp /tmp/62.XXX 00:05:22.742 + tmp_file_1=/tmp/62.CnD 00:05:22.742 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.742 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:22.742 + tmp_file_2=/tmp/spdk_tgt_config.json.7et 00:05:22.742 + ret=0 00:05:22.742 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.002 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.261 + diff -u /tmp/62.CnD /tmp/spdk_tgt_config.json.7et 00:05:23.261 + ret=1 00:05:23.261 + echo '=== Start of file: /tmp/62.CnD ===' 00:05:23.261 + cat /tmp/62.CnD 00:05:23.261 + echo '=== End of file: /tmp/62.CnD ===' 00:05:23.261 + echo '' 00:05:23.261 + echo '=== Start of file: /tmp/spdk_tgt_config.json.7et ===' 00:05:23.261 + cat /tmp/spdk_tgt_config.json.7et 00:05:23.262 + echo '=== End of file: /tmp/spdk_tgt_config.json.7et ===' 00:05:23.262 + echo '' 00:05:23.262 + rm /tmp/62.CnD /tmp/spdk_tgt_config.json.7et 00:05:23.262 + exit 1 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:23.262 INFO: configuration change detected. 
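Both diff passes above follow the same recipe: dump the live configuration, normalize both sides with config_filter.py -method sort, and compare. The first pass must produce no diff; after MallocBdevForConfigChangeCheck is deleted, the second pass must. A condensed sketch, with illustrative temporary file names rather than the /tmp/62.* names the harness generated:

scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > /tmp/live.json
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
diff -u /tmp/saved.json /tmp/live.json    # exit 0: identical; non-zero: change detected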
00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@324 -- # [[ -n 3127323 ]] 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.262 15:42:08 json_config -- json_config/json_config.sh@330 -- # killprocess 3127323 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@954 -- # '[' -z 3127323 ']' 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@958 -- # kill -0 3127323 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@959 -- # uname 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3127323 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3127323' 00:05:23.262 killing process with pid 3127323 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@973 -- # kill 3127323 00:05:23.262 15:42:08 json_config -- common/autotest_common.sh@978 -- # wait 3127323 00:05:26.568 15:42:11 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.568 15:42:11 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:26.568 15:42:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.568 15:42:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.568 15:42:12 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:26.568 15:42:12 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:26.568 INFO: Success 00:05:26.568 15:42:12 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:05:26.568 15:42:12 json_config -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:26.568 15:42:12 json_config -- nvmf/common.sh@121 -- # sync 00:05:26.568 15:42:12 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:05:26.568 15:42:12 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:05:26.568 15:42:12 json_config -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:05:26.568 15:42:12 json_config -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:26.568 15:42:12 json_config -- nvmf/common.sh@523 -- # [[ '' == \t\c\p ]] 00:05:26.568 00:05:26.568 real 0m26.622s 00:05:26.568 user 0m28.916s 00:05:26.568 sys 0m8.238s 00:05:26.568 15:42:12 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.568 15:42:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.568 ************************************ 00:05:26.568 END TEST json_config 00:05:26.568 ************************************ 00:05:26.568 15:42:12 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.568 15:42:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.568 15:42:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.568 15:42:12 -- common/autotest_common.sh@10 -- # set +x 00:05:26.568 ************************************ 00:05:26.568 START TEST json_config_extra_key 00:05:26.568 ************************************ 00:05:26.829 15:42:12 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.829 15:42:12 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.829 15:42:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.829 15:42:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.829 15:42:12 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:26.829 15:42:12 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.829 15:42:12 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.829 --rc genhtml_branch_coverage=1 00:05:26.829 --rc genhtml_function_coverage=1 00:05:26.829 --rc genhtml_legend=1 00:05:26.829 --rc geninfo_all_blocks=1 00:05:26.829 --rc geninfo_unexecuted_blocks=1 00:05:26.829 00:05:26.829 ' 00:05:26.829 15:42:12 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.829 --rc genhtml_branch_coverage=1 00:05:26.829 --rc genhtml_function_coverage=1 00:05:26.829 --rc genhtml_legend=1 00:05:26.829 --rc geninfo_all_blocks=1 00:05:26.829 --rc geninfo_unexecuted_blocks=1 00:05:26.829 00:05:26.829 ' 00:05:26.829 15:42:12 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.829 --rc genhtml_branch_coverage=1 00:05:26.829 --rc genhtml_function_coverage=1 00:05:26.829 --rc genhtml_legend=1 00:05:26.829 --rc geninfo_all_blocks=1 00:05:26.829 --rc geninfo_unexecuted_blocks=1 00:05:26.829 00:05:26.829 ' 00:05:26.829 15:42:12 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.829 --rc genhtml_branch_coverage=1 00:05:26.829 --rc genhtml_function_coverage=1 00:05:26.829 --rc genhtml_legend=1 00:05:26.829 --rc geninfo_all_blocks=1 00:05:26.829 --rc geninfo_unexecuted_blocks=1 00:05:26.829 00:05:26.829 ' 00:05:26.829 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.829 
15:42:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.829 15:42:12 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.829 15:42:12 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.829 15:42:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.829 15:42:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.830 15:42:12 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.830 15:42:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:26.830 15:42:12 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.830 15:42:12 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:26.830 15:42:12 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:26.830 15:42:12 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:26.830 15:42:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.830 15:42:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.830 15:42:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.830 15:42:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:26.830 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:26.830 15:42:12 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:26.830 15:42:12 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:26.830 15:42:12 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:26.830 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:26.830 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:26.830 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:26.830 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:26.830 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:26.830 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:26.830 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:26.830 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:26.830 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:26.830 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.830 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:26.830 INFO: launching applications... 
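The extra_key.json handed to the target below is not printed in this log; for orientation only, an SPDK --json configuration file generally has the shape sketched here (illustrative subsystem and values, not the actual test file):

cat > example_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 131072, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF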
00:05:26.830 15:42:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.830 15:42:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:26.830 15:42:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:26.830 15:42:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.830 15:42:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.830 15:42:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.830 15:42:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.830 15:42:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.830 15:42:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3129400 00:05:26.830 15:42:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.830 Waiting for target to run... 00:05:26.830 15:42:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3129400 /var/tmp/spdk_tgt.sock 00:05:26.830 15:42:12 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3129400 ']' 00:05:26.830 15:42:12 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.830 15:42:12 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.830 15:42:12 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.830 15:42:12 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.830 15:42:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.830 15:42:12 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:05:27.089 [2024-11-17 15:42:12.387570] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:27.089 [2024-11-17 15:42:12.387665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129400 ] 00:05:27.348 [2024-11-17 15:42:12.743434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.348 [2024-11-17 15:42:12.833910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.917 15:42:13 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.917 15:42:13 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:27.917 15:42:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:27.917 00:05:27.917 15:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:27.917 INFO: shutting down applications... 
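The shutdown that follows uses a simple signal-and-poll pattern: send SIGINT, then probe the PID with kill -0 for up to 30 half-second intervals before declaring the target gone. A condensed restatement of the loop driven by json_config/common.sh, not the literal script:

kill -SIGINT "$app_pid"                        # ask the target to exit cleanly
for i in $(seq 1 30); do
    kill -0 "$app_pid" 2>/dev/null || break    # break once the process is gone
    sleep 0.5
done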
00:05:27.917 15:42:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:27.917 15:42:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:27.917 15:42:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:27.917 15:42:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3129400 ]] 00:05:27.917 15:42:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3129400 00:05:27.917 15:42:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:27.917 15:42:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.917 15:42:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3129400 00:05:27.917 15:42:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.486 15:42:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.486 15:42:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.486 15:42:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3129400 00:05:28.486 15:42:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.054 15:42:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.054 15:42:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.054 15:42:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3129400 00:05:29.054 15:42:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.623 15:42:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.623 15:42:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.623 15:42:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3129400 00:05:29.623 15:42:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:29.882 15:42:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:29.882 15:42:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.882 15:42:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3129400 00:05:29.882 15:42:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:30.451 15:42:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:30.451 15:42:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.451 15:42:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3129400 00:05:30.451 15:42:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:30.451 15:42:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:30.451 15:42:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:30.451 15:42:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:30.451 SPDK target shutdown done 00:05:30.451 15:42:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:30.451 Success 00:05:30.451 00:05:30.451 real 0m3.806s 00:05:30.451 user 0m3.507s 00:05:30.451 sys 0m0.605s 00:05:30.451 15:42:15 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.451 15:42:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:30.451 ************************************ 00:05:30.451 END TEST json_config_extra_key 00:05:30.451 ************************************ 00:05:30.451 15:42:15 -- 
spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.451 15:42:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.451 15:42:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.451 15:42:15 -- common/autotest_common.sh@10 -- # set +x 00:05:30.451 ************************************ 00:05:30.451 START TEST alias_rpc 00:05:30.451 ************************************ 00:05:30.451 15:42:15 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:30.711 * Looking for test storage... 00:05:30.711 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.711 15:42:16 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.711 --rc genhtml_branch_coverage=1 00:05:30.711 --rc genhtml_function_coverage=1 00:05:30.711 --rc genhtml_legend=1 00:05:30.711 --rc geninfo_all_blocks=1 00:05:30.711 --rc geninfo_unexecuted_blocks=1 00:05:30.711 00:05:30.711 ' 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.711 --rc genhtml_branch_coverage=1 00:05:30.711 --rc genhtml_function_coverage=1 00:05:30.711 --rc genhtml_legend=1 00:05:30.711 --rc geninfo_all_blocks=1 00:05:30.711 --rc geninfo_unexecuted_blocks=1 00:05:30.711 00:05:30.711 ' 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.711 --rc genhtml_branch_coverage=1 00:05:30.711 --rc genhtml_function_coverage=1 00:05:30.711 --rc genhtml_legend=1 00:05:30.711 --rc geninfo_all_blocks=1 00:05:30.711 --rc geninfo_unexecuted_blocks=1 00:05:30.711 00:05:30.711 ' 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.711 --rc genhtml_branch_coverage=1 00:05:30.711 --rc genhtml_function_coverage=1 00:05:30.711 --rc genhtml_legend=1 00:05:30.711 --rc geninfo_all_blocks=1 00:05:30.711 --rc geninfo_unexecuted_blocks=1 00:05:30.711 00:05:30.711 ' 00:05:30.711 15:42:16 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:30.711 15:42:16 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3130253 00:05:30.711 15:42:16 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3130253 00:05:30.711 15:42:16 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3130253 ']' 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:30.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.711 15:42:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.971 [2024-11-17 15:42:16.271928] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:30.971 [2024-11-17 15:42:16.272015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130253 ] 00:05:30.971 [2024-11-17 15:42:16.402034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.971 [2024-11-17 15:42:16.500482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.022 15:42:17 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.022 15:42:17 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.022 15:42:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:32.022 15:42:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3130253 00:05:32.022 15:42:17 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3130253 ']' 00:05:32.022 15:42:17 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3130253 00:05:32.022 15:42:17 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:32.022 15:42:17 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.022 15:42:17 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130253 00:05:32.022 15:42:17 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.022 15:42:17 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.022 15:42:17 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130253' 00:05:32.022 killing process with pid 3130253 00:05:32.022 15:42:17 alias_rpc -- common/autotest_common.sh@973 -- # kill 3130253 00:05:32.022 15:42:17 alias_rpc -- common/autotest_common.sh@978 -- # wait 3130253 00:05:34.559 00:05:34.559 real 0m3.667s 00:05:34.559 user 0m3.643s 00:05:34.559 sys 0m0.634s 00:05:34.559 15:42:19 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.559 15:42:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.559 ************************************ 00:05:34.559 END TEST alias_rpc 00:05:34.559 ************************************ 00:05:34.559 15:42:19 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:34.559 15:42:19 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.559 15:42:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.559 15:42:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.559 15:42:19 -- common/autotest_common.sh@10 -- # set +x 00:05:34.559 ************************************ 00:05:34.559 START TEST spdkcli_tcp 00:05:34.559 ************************************ 00:05:34.559 15:42:19 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.559 * Looking for test storage... 
00:05:34.559 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:05:34.559 15:42:19 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.559 15:42:19 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.559 15:42:19 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.559 15:42:19 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.559 15:42:19 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.560 15:42:19 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.560 --rc genhtml_branch_coverage=1 00:05:34.560 --rc genhtml_function_coverage=1 00:05:34.560 --rc genhtml_legend=1 00:05:34.560 --rc geninfo_all_blocks=1 00:05:34.560 --rc geninfo_unexecuted_blocks=1 00:05:34.560 00:05:34.560 ' 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.560 --rc genhtml_branch_coverage=1 00:05:34.560 --rc genhtml_function_coverage=1 00:05:34.560 --rc genhtml_legend=1 00:05:34.560 --rc geninfo_all_blocks=1 00:05:34.560 --rc geninfo_unexecuted_blocks=1 
00:05:34.560 00:05:34.560 ' 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.560 --rc genhtml_branch_coverage=1 00:05:34.560 --rc genhtml_function_coverage=1 00:05:34.560 --rc genhtml_legend=1 00:05:34.560 --rc geninfo_all_blocks=1 00:05:34.560 --rc geninfo_unexecuted_blocks=1 00:05:34.560 00:05:34.560 ' 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.560 --rc genhtml_branch_coverage=1 00:05:34.560 --rc genhtml_function_coverage=1 00:05:34.560 --rc genhtml_legend=1 00:05:34.560 --rc geninfo_all_blocks=1 00:05:34.560 --rc geninfo_unexecuted_blocks=1 00:05:34.560 00:05:34.560 ' 00:05:34.560 15:42:19 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:05:34.560 15:42:19 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:34.560 15:42:19 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:05:34.560 15:42:19 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:34.560 15:42:19 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:34.560 15:42:19 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:34.560 15:42:19 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.560 15:42:19 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3130870 00:05:34.560 15:42:19 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:34.560 15:42:19 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3130870 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3130870 ']' 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.560 15:42:19 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.560 [2024-11-17 15:42:20.015150] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:34.560 [2024-11-17 15:42:20.015252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3130870 ] 00:05:34.818 [2024-11-17 15:42:20.151416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.818 [2024-11-17 15:42:20.258294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.818 [2024-11-17 15:42:20.258303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.754 15:42:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.754 15:42:21 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:35.754 15:42:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:35.754 15:42:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3131135 00:05:35.754 15:42:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:35.754 [ 00:05:35.754 "bdev_malloc_delete", 00:05:35.754 "bdev_malloc_create", 00:05:35.754 "bdev_null_resize", 00:05:35.754 "bdev_null_delete", 00:05:35.754 "bdev_null_create", 00:05:35.754 "bdev_nvme_cuse_unregister", 00:05:35.754 "bdev_nvme_cuse_register", 00:05:35.754 "bdev_opal_new_user", 00:05:35.754 "bdev_opal_set_lock_state", 00:05:35.754 "bdev_opal_delete", 00:05:35.754 "bdev_opal_get_info", 00:05:35.754 "bdev_opal_create", 00:05:35.755 "bdev_nvme_opal_revert", 00:05:35.755 "bdev_nvme_opal_init", 00:05:35.755 "bdev_nvme_send_cmd", 00:05:35.755 "bdev_nvme_set_keys", 00:05:35.755 "bdev_nvme_get_path_iostat", 00:05:35.755 "bdev_nvme_get_mdns_discovery_info", 00:05:35.755 "bdev_nvme_stop_mdns_discovery", 00:05:35.755 "bdev_nvme_start_mdns_discovery", 00:05:35.755 "bdev_nvme_set_multipath_policy", 00:05:35.755 "bdev_nvme_set_preferred_path", 00:05:35.755 "bdev_nvme_get_io_paths", 00:05:35.755 "bdev_nvme_remove_error_injection", 00:05:35.755 "bdev_nvme_add_error_injection", 00:05:35.755 "bdev_nvme_get_discovery_info", 00:05:35.755 "bdev_nvme_stop_discovery", 00:05:35.755 "bdev_nvme_start_discovery", 00:05:35.755 "bdev_nvme_get_controller_health_info", 00:05:35.755 "bdev_nvme_disable_controller", 00:05:35.755 "bdev_nvme_enable_controller", 00:05:35.755 "bdev_nvme_reset_controller", 00:05:35.755 "bdev_nvme_get_transport_statistics", 00:05:35.755 "bdev_nvme_apply_firmware", 00:05:35.755 "bdev_nvme_detach_controller", 00:05:35.755 "bdev_nvme_get_controllers", 00:05:35.755 "bdev_nvme_attach_controller", 00:05:35.755 "bdev_nvme_set_hotplug", 00:05:35.755 "bdev_nvme_set_options", 00:05:35.755 "bdev_passthru_delete", 00:05:35.755 "bdev_passthru_create", 00:05:35.755 "bdev_lvol_set_parent_bdev", 00:05:35.755 "bdev_lvol_set_parent", 00:05:35.755 "bdev_lvol_check_shallow_copy", 00:05:35.755 "bdev_lvol_start_shallow_copy", 00:05:35.755 "bdev_lvol_grow_lvstore", 00:05:35.755 "bdev_lvol_get_lvols", 00:05:35.755 "bdev_lvol_get_lvstores", 00:05:35.755 "bdev_lvol_delete", 00:05:35.755 "bdev_lvol_set_read_only", 00:05:35.755 "bdev_lvol_resize", 00:05:35.755 "bdev_lvol_decouple_parent", 00:05:35.755 "bdev_lvol_inflate", 00:05:35.755 "bdev_lvol_rename", 00:05:35.755 "bdev_lvol_clone_bdev", 00:05:35.755 "bdev_lvol_clone", 00:05:35.755 "bdev_lvol_snapshot", 00:05:35.755 "bdev_lvol_create", 00:05:35.755 "bdev_lvol_delete_lvstore", 00:05:35.755 "bdev_lvol_rename_lvstore", 
00:05:35.755 "bdev_lvol_create_lvstore", 00:05:35.755 "bdev_raid_set_options", 00:05:35.755 "bdev_raid_remove_base_bdev", 00:05:35.755 "bdev_raid_add_base_bdev", 00:05:35.755 "bdev_raid_delete", 00:05:35.755 "bdev_raid_create", 00:05:35.755 "bdev_raid_get_bdevs", 00:05:35.755 "bdev_error_inject_error", 00:05:35.755 "bdev_error_delete", 00:05:35.755 "bdev_error_create", 00:05:35.755 "bdev_split_delete", 00:05:35.755 "bdev_split_create", 00:05:35.755 "bdev_delay_delete", 00:05:35.755 "bdev_delay_create", 00:05:35.755 "bdev_delay_update_latency", 00:05:35.755 "bdev_zone_block_delete", 00:05:35.755 "bdev_zone_block_create", 00:05:35.755 "blobfs_create", 00:05:35.755 "blobfs_detect", 00:05:35.755 "blobfs_set_cache_size", 00:05:35.755 "bdev_aio_delete", 00:05:35.755 "bdev_aio_rescan", 00:05:35.755 "bdev_aio_create", 00:05:35.755 "bdev_ftl_set_property", 00:05:35.755 "bdev_ftl_get_properties", 00:05:35.755 "bdev_ftl_get_stats", 00:05:35.755 "bdev_ftl_unmap", 00:05:35.755 "bdev_ftl_unload", 00:05:35.755 "bdev_ftl_delete", 00:05:35.755 "bdev_ftl_load", 00:05:35.755 "bdev_ftl_create", 00:05:35.755 "bdev_virtio_attach_controller", 00:05:35.755 "bdev_virtio_scsi_get_devices", 00:05:35.755 "bdev_virtio_detach_controller", 00:05:35.755 "bdev_virtio_blk_set_hotplug", 00:05:35.755 "bdev_iscsi_delete", 00:05:35.755 "bdev_iscsi_create", 00:05:35.755 "bdev_iscsi_set_options", 00:05:35.755 "accel_error_inject_error", 00:05:35.755 "ioat_scan_accel_module", 00:05:35.755 "dsa_scan_accel_module", 00:05:35.755 "iaa_scan_accel_module", 00:05:35.755 "keyring_file_remove_key", 00:05:35.755 "keyring_file_add_key", 00:05:35.755 "keyring_linux_set_options", 00:05:35.755 "fsdev_aio_delete", 00:05:35.755 "fsdev_aio_create", 00:05:35.755 "iscsi_get_histogram", 00:05:35.755 "iscsi_enable_histogram", 00:05:35.755 "iscsi_set_options", 00:05:35.755 "iscsi_get_auth_groups", 00:05:35.755 "iscsi_auth_group_remove_secret", 00:05:35.755 "iscsi_auth_group_add_secret", 00:05:35.755 "iscsi_delete_auth_group", 00:05:35.755 "iscsi_create_auth_group", 00:05:35.755 "iscsi_set_discovery_auth", 00:05:35.755 "iscsi_get_options", 00:05:35.755 "iscsi_target_node_request_logout", 00:05:35.755 "iscsi_target_node_set_redirect", 00:05:35.755 "iscsi_target_node_set_auth", 00:05:35.755 "iscsi_target_node_add_lun", 00:05:35.755 "iscsi_get_stats", 00:05:35.755 "iscsi_get_connections", 00:05:35.755 "iscsi_portal_group_set_auth", 00:05:35.755 "iscsi_start_portal_group", 00:05:35.755 "iscsi_delete_portal_group", 00:05:35.755 "iscsi_create_portal_group", 00:05:35.755 "iscsi_get_portal_groups", 00:05:35.755 "iscsi_delete_target_node", 00:05:35.755 "iscsi_target_node_remove_pg_ig_maps", 00:05:35.755 "iscsi_target_node_add_pg_ig_maps", 00:05:35.755 "iscsi_create_target_node", 00:05:35.755 "iscsi_get_target_nodes", 00:05:35.755 "iscsi_delete_initiator_group", 00:05:35.755 "iscsi_initiator_group_remove_initiators", 00:05:35.755 "iscsi_initiator_group_add_initiators", 00:05:35.755 "iscsi_create_initiator_group", 00:05:35.755 "iscsi_get_initiator_groups", 00:05:35.755 "nvmf_set_crdt", 00:05:35.755 "nvmf_set_config", 00:05:35.755 "nvmf_set_max_subsystems", 00:05:35.755 "nvmf_stop_mdns_prr", 00:05:35.755 "nvmf_publish_mdns_prr", 00:05:35.755 "nvmf_subsystem_get_listeners", 00:05:35.755 "nvmf_subsystem_get_qpairs", 00:05:35.755 "nvmf_subsystem_get_controllers", 00:05:35.755 "nvmf_get_stats", 00:05:35.755 "nvmf_get_transports", 00:05:35.755 "nvmf_create_transport", 00:05:35.755 "nvmf_get_targets", 00:05:35.755 "nvmf_delete_target", 00:05:35.755 "nvmf_create_target", 
00:05:35.755 "nvmf_subsystem_allow_any_host", 00:05:35.755 "nvmf_subsystem_set_keys", 00:05:35.755 "nvmf_subsystem_remove_host", 00:05:35.755 "nvmf_subsystem_add_host", 00:05:35.755 "nvmf_ns_remove_host", 00:05:35.755 "nvmf_ns_add_host", 00:05:35.755 "nvmf_subsystem_remove_ns", 00:05:35.755 "nvmf_subsystem_set_ns_ana_group", 00:05:35.755 "nvmf_subsystem_add_ns", 00:05:35.755 "nvmf_subsystem_listener_set_ana_state", 00:05:35.755 "nvmf_discovery_get_referrals", 00:05:35.755 "nvmf_discovery_remove_referral", 00:05:35.755 "nvmf_discovery_add_referral", 00:05:35.755 "nvmf_subsystem_remove_listener", 00:05:35.755 "nvmf_subsystem_add_listener", 00:05:35.755 "nvmf_delete_subsystem", 00:05:35.755 "nvmf_create_subsystem", 00:05:35.755 "nvmf_get_subsystems", 00:05:35.755 "env_dpdk_get_mem_stats", 00:05:35.755 "nbd_get_disks", 00:05:35.755 "nbd_stop_disk", 00:05:35.755 "nbd_start_disk", 00:05:35.755 "ublk_recover_disk", 00:05:35.755 "ublk_get_disks", 00:05:35.755 "ublk_stop_disk", 00:05:35.755 "ublk_start_disk", 00:05:35.755 "ublk_destroy_target", 00:05:35.755 "ublk_create_target", 00:05:35.755 "virtio_blk_create_transport", 00:05:35.755 "virtio_blk_get_transports", 00:05:35.755 "vhost_controller_set_coalescing", 00:05:35.755 "vhost_get_controllers", 00:05:35.755 "vhost_delete_controller", 00:05:35.755 "vhost_create_blk_controller", 00:05:35.755 "vhost_scsi_controller_remove_target", 00:05:35.755 "vhost_scsi_controller_add_target", 00:05:35.755 "vhost_start_scsi_controller", 00:05:35.755 "vhost_create_scsi_controller", 00:05:35.755 "thread_set_cpumask", 00:05:35.755 "scheduler_set_options", 00:05:35.755 "framework_get_governor", 00:05:35.755 "framework_get_scheduler", 00:05:35.755 "framework_set_scheduler", 00:05:35.755 "framework_get_reactors", 00:05:35.755 "thread_get_io_channels", 00:05:35.755 "thread_get_pollers", 00:05:35.755 "thread_get_stats", 00:05:35.755 "framework_monitor_context_switch", 00:05:35.755 "spdk_kill_instance", 00:05:35.755 "log_enable_timestamps", 00:05:35.755 "log_get_flags", 00:05:35.755 "log_clear_flag", 00:05:35.755 "log_set_flag", 00:05:35.755 "log_get_level", 00:05:35.755 "log_set_level", 00:05:35.755 "log_get_print_level", 00:05:35.755 "log_set_print_level", 00:05:35.755 "framework_enable_cpumask_locks", 00:05:35.755 "framework_disable_cpumask_locks", 00:05:35.755 "framework_wait_init", 00:05:35.755 "framework_start_init", 00:05:35.755 "scsi_get_devices", 00:05:35.755 "bdev_get_histogram", 00:05:35.755 "bdev_enable_histogram", 00:05:35.755 "bdev_set_qos_limit", 00:05:35.755 "bdev_set_qd_sampling_period", 00:05:35.755 "bdev_get_bdevs", 00:05:35.755 "bdev_reset_iostat", 00:05:35.755 "bdev_get_iostat", 00:05:35.755 "bdev_examine", 00:05:35.755 "bdev_wait_for_examine", 00:05:35.755 "bdev_set_options", 00:05:35.755 "accel_get_stats", 00:05:35.755 "accel_set_options", 00:05:35.755 "accel_set_driver", 00:05:35.755 "accel_crypto_key_destroy", 00:05:35.755 "accel_crypto_keys_get", 00:05:35.755 "accel_crypto_key_create", 00:05:35.755 "accel_assign_opc", 00:05:35.755 "accel_get_module_info", 00:05:35.755 "accel_get_opc_assignments", 00:05:35.755 "vmd_rescan", 00:05:35.755 "vmd_remove_device", 00:05:35.755 "vmd_enable", 00:05:35.755 "sock_get_default_impl", 00:05:35.755 "sock_set_default_impl", 00:05:35.755 "sock_impl_set_options", 00:05:35.755 "sock_impl_get_options", 00:05:35.755 "iobuf_get_stats", 00:05:35.755 "iobuf_set_options", 00:05:35.755 "keyring_get_keys", 00:05:35.756 "framework_get_pci_devices", 00:05:35.756 "framework_get_config", 00:05:35.756 "framework_get_subsystems", 
00:05:35.756 "fsdev_set_opts", 00:05:35.756 "fsdev_get_opts", 00:05:35.756 "trace_get_info", 00:05:35.756 "trace_get_tpoint_group_mask", 00:05:35.756 "trace_disable_tpoint_group", 00:05:35.756 "trace_enable_tpoint_group", 00:05:35.756 "trace_clear_tpoint_mask", 00:05:35.756 "trace_set_tpoint_mask", 00:05:35.756 "notify_get_notifications", 00:05:35.756 "notify_get_types", 00:05:35.756 "spdk_get_version", 00:05:35.756 "rpc_get_methods" 00:05:35.756 ] 00:05:35.756 15:42:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:35.756 15:42:21 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:35.756 15:42:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.756 15:42:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:35.756 15:42:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3130870 00:05:35.756 15:42:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3130870 ']' 00:05:35.756 15:42:21 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3130870 00:05:35.756 15:42:21 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:35.756 15:42:21 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.756 15:42:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3130870 00:05:36.014 15:42:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.014 15:42:21 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.014 15:42:21 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3130870' 00:05:36.014 killing process with pid 3130870 00:05:36.014 15:42:21 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3130870 00:05:36.014 15:42:21 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3130870 00:05:38.547 00:05:38.547 real 0m3.804s 00:05:38.547 user 0m6.804s 00:05:38.547 sys 0m0.690s 00:05:38.547 15:42:23 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.547 15:42:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.547 ************************************ 00:05:38.547 END TEST spdkcli_tcp 00:05:38.547 ************************************ 00:05:38.547 15:42:23 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.547 15:42:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.547 15:42:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.547 15:42:23 -- common/autotest_common.sh@10 -- # set +x 00:05:38.547 ************************************ 00:05:38.547 START TEST dpdk_mem_utility 00:05:38.547 ************************************ 00:05:38.547 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:38.547 * Looking for test storage... 
00:05:38.547 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:05:38.547 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.547 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.547 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.547 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.547 15:42:23 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:38.547 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.547 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.547 --rc genhtml_branch_coverage=1 00:05:38.547 --rc genhtml_function_coverage=1 00:05:38.547 --rc genhtml_legend=1 00:05:38.547 --rc geninfo_all_blocks=1 00:05:38.547 --rc geninfo_unexecuted_blocks=1 00:05:38.547 00:05:38.547 ' 00:05:38.547 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.547 --rc 
genhtml_branch_coverage=1 00:05:38.548 --rc genhtml_function_coverage=1 00:05:38.548 --rc genhtml_legend=1 00:05:38.548 --rc geninfo_all_blocks=1 00:05:38.548 --rc geninfo_unexecuted_blocks=1 00:05:38.548 00:05:38.548 ' 00:05:38.548 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.548 --rc genhtml_branch_coverage=1 00:05:38.548 --rc genhtml_function_coverage=1 00:05:38.548 --rc genhtml_legend=1 00:05:38.548 --rc geninfo_all_blocks=1 00:05:38.548 --rc geninfo_unexecuted_blocks=1 00:05:38.548 00:05:38.548 ' 00:05:38.548 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.548 --rc genhtml_branch_coverage=1 00:05:38.548 --rc genhtml_function_coverage=1 00:05:38.548 --rc genhtml_legend=1 00:05:38.548 --rc geninfo_all_blocks=1 00:05:38.548 --rc geninfo_unexecuted_blocks=1 00:05:38.548 00:05:38.548 ' 00:05:38.548 15:42:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:38.548 15:42:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3131730 00:05:38.548 15:42:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3131730 00:05:38.548 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3131730 ']' 00:05:38.548 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.548 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.548 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.548 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.548 15:42:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.548 15:42:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:38.548 [2024-11-17 15:42:23.908629] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:38.548 [2024-11-17 15:42:23.908723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131730 ] 00:05:38.548 [2024-11-17 15:42:24.038500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.806 [2024-11-17 15:42:24.134170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.374 15:42:24 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.374 15:42:24 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:39.374 15:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:39.374 15:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:39.374 15:42:24 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.374 15:42:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:39.374 { 00:05:39.374 "filename": "/tmp/spdk_mem_dump.txt" 00:05:39.374 } 00:05:39.374 15:42:24 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.374 15:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:39.374 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:39.374 1 heaps totaling size 816.000000 MiB 00:05:39.374 size: 816.000000 MiB heap id: 0 00:05:39.374 end heaps---------- 00:05:39.374 9 mempools totaling size 595.772034 MiB 00:05:39.374 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:39.374 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:39.374 size: 92.545471 MiB name: bdev_io_3131730 00:05:39.374 size: 50.003479 MiB name: msgpool_3131730 00:05:39.374 size: 36.509338 MiB name: fsdev_io_3131730 00:05:39.374 size: 21.763794 MiB name: PDU_Pool 00:05:39.374 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:39.374 size: 4.133484 MiB name: evtpool_3131730 00:05:39.374 size: 0.026123 MiB name: Session_Pool 00:05:39.374 end mempools------- 00:05:39.374 6 memzones totaling size 4.142822 MiB 00:05:39.374 size: 1.000366 MiB name: RG_ring_0_3131730 00:05:39.374 size: 1.000366 MiB name: RG_ring_1_3131730 00:05:39.374 size: 1.000366 MiB name: RG_ring_4_3131730 00:05:39.374 size: 1.000366 MiB name: RG_ring_5_3131730 00:05:39.374 size: 0.125366 MiB name: RG_ring_2_3131730 00:05:39.374 size: 0.015991 MiB name: RG_ring_3_3131730 00:05:39.374 end memzones------- 00:05:39.374 15:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:39.633 heap id: 0 total size: 816.000000 MiB number of busy elements: 44 number of free elements: 19 00:05:39.633 list of free elements. 
size: 16.857605 MiB 00:05:39.633 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:39.633 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:39.633 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:39.633 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:39.633 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:39.633 element at address: 0x200019200000 with size: 0.999329 MiB 00:05:39.633 element at address: 0x200000400000 with size: 0.998108 MiB 00:05:39.633 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:39.633 element at address: 0x200018a00000 with size: 0.959900 MiB 00:05:39.633 element at address: 0x200019500040 with size: 0.937256 MiB 00:05:39.633 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:39.633 element at address: 0x20001ac00000 with size: 0.583191 MiB 00:05:39.633 element at address: 0x200000c00000 with size: 0.495300 MiB 00:05:39.633 element at address: 0x200018e00000 with size: 0.491150 MiB 00:05:39.633 element at address: 0x200019600000 with size: 0.485657 MiB 00:05:39.633 element at address: 0x200012c00000 with size: 0.446167 MiB 00:05:39.633 element at address: 0x200028000000 with size: 0.411072 MiB 00:05:39.633 element at address: 0x200000800000 with size: 0.355286 MiB 00:05:39.633 element at address: 0x20000a5ff040 with size: 0.001038 MiB 00:05:39.633 list of standard malloc elements. size: 199.221497 MiB 00:05:39.633 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:39.633 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:39.633 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:39.633 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:39.633 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:39.633 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:39.633 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:39.633 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:39.633 element at address: 0x200012bff040 with size: 0.000427 MiB 00:05:39.633 element at address: 0x200012bffa00 with size: 0.000366 MiB 00:05:39.633 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:39.633 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:39.633 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:39.633 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:39.633 element at address: 0x2000004ffa40 with size: 0.000244 MiB 00:05:39.633 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:39.633 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:39.633 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:39.633 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:39.633 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:39.633 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:39.633 element at address: 0x20000a5ff480 with size: 0.000244 MiB 00:05:39.633 element at address: 0x20000a5ff580 with size: 0.000244 MiB 00:05:39.633 element at address: 0x20000a5ff680 with size: 0.000244 MiB 00:05:39.633 element at address: 0x20000a5ff780 with size: 0.000244 MiB 00:05:39.633 element at address: 0x20000a5ff880 with size: 0.000244 MiB 00:05:39.633 element at address: 0x20000a5ff980 with size: 0.000244 MiB 
00:05:39.633 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:39.633 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:39.633 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:39.633 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200012bff200 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200012bff300 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200012bff400 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200012bff500 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200012bff600 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200012bff700 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200012bff800 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200012bff900 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:39.633 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:39.633 list of memzone associated elements. size: 599.920898 MiB 00:05:39.633 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:39.633 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:39.633 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:39.633 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:39.633 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:39.633 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3131730_0 00:05:39.633 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:39.633 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3131730_0 00:05:39.633 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:39.633 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3131730_0 00:05:39.633 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:39.633 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:39.633 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:39.633 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:39.633 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:39.633 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3131730_0 00:05:39.633 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:39.633 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3131730 00:05:39.633 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:39.633 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3131730 00:05:39.633 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:39.634 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:39.634 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:39.634 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:39.634 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:39.634 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:39.634 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:39.634 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:39.634 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:39.634 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3131730 00:05:39.634 element at address: 0x2000008ffb80 with 
size: 1.000549 MiB 00:05:39.634 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3131730 00:05:39.634 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:39.634 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3131730 00:05:39.634 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:39.634 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3131730 00:05:39.634 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:39.634 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3131730 00:05:39.634 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:39.634 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3131730 00:05:39.634 element at address: 0x200018e7dbc0 with size: 0.500549 MiB 00:05:39.634 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:39.634 element at address: 0x200012c72380 with size: 0.500549 MiB 00:05:39.634 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:39.634 element at address: 0x20001967c540 with size: 0.250549 MiB 00:05:39.634 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:39.634 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:39.634 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3131730 00:05:39.634 element at address: 0x20000085f180 with size: 0.125549 MiB 00:05:39.634 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3131730 00:05:39.634 element at address: 0x200018af5bc0 with size: 0.031799 MiB 00:05:39.634 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:39.634 element at address: 0x2000280693c0 with size: 0.023804 MiB 00:05:39.634 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:39.634 element at address: 0x20000085af40 with size: 0.016174 MiB 00:05:39.634 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3131730 00:05:39.634 element at address: 0x20002806f540 with size: 0.002502 MiB 00:05:39.634 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:39.634 element at address: 0x2000004ffb40 with size: 0.000366 MiB 00:05:39.634 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3131730 00:05:39.634 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:39.634 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3131730 00:05:39.634 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:39.634 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3131730 00:05:39.634 element at address: 0x20000a5ffa80 with size: 0.000366 MiB 00:05:39.634 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:39.634 15:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:39.634 15:42:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3131730 00:05:39.634 15:42:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3131730 ']' 00:05:39.634 15:42:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3131730 00:05:39.634 15:42:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:39.634 15:42:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.634 15:42:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131730 00:05:39.634 15:42:25 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
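The memzone listing above is produced by SPDK's dpdk_mem_info.py helper reading the dump file that the env_dpdk_get_mem_stats RPC writes (the trace reports /tmp/spdk_mem_dump.txt). A minimal sketch of the same flow against an spdk_tgt that is already listening on /var/tmp/spdk.sock, with paths shortened to the repo root:

# ask the running target to dump its DPDK memory stats; the RPC replies with the dump filename
scripts/rpc.py env_dpdk_get_mem_stats
# summarize heaps, mempools and memzones from /tmp/spdk_mem_dump.txt
scripts/dpdk_mem_info.py
# per-element detail for heap id 0, as test_dpdk_mem_info.sh does with -m 0
scripts/dpdk_mem_info.py -m 0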
00:05:39.634 15:42:25 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.634 15:42:25 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131730' 00:05:39.634 killing process with pid 3131730 00:05:39.634 15:42:25 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3131730 00:05:39.634 15:42:25 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3131730 00:05:42.167 00:05:42.167 real 0m3.506s 00:05:42.167 user 0m3.391s 00:05:42.167 sys 0m0.624s 00:05:42.167 15:42:27 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.167 15:42:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:42.167 ************************************ 00:05:42.167 END TEST dpdk_mem_utility 00:05:42.167 ************************************ 00:05:42.167 15:42:27 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:42.167 15:42:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.167 15:42:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.167 15:42:27 -- common/autotest_common.sh@10 -- # set +x 00:05:42.167 ************************************ 00:05:42.167 START TEST event 00:05:42.167 ************************************ 00:05:42.167 15:42:27 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:05:42.167 * Looking for test storage... 00:05:42.167 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:05:42.167 15:42:27 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:42.167 15:42:27 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:42.167 15:42:27 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:42.167 15:42:27 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:42.167 15:42:27 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.167 15:42:27 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.167 15:42:27 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.167 15:42:27 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.167 15:42:27 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.167 15:42:27 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.167 15:42:27 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.167 15:42:27 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.167 15:42:27 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.167 15:42:27 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.167 15:42:27 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.167 15:42:27 event -- scripts/common.sh@344 -- # case "$op" in 00:05:42.167 15:42:27 event -- scripts/common.sh@345 -- # : 1 00:05:42.167 15:42:27 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.167 15:42:27 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.167 15:42:27 event -- scripts/common.sh@365 -- # decimal 1 00:05:42.167 15:42:27 event -- scripts/common.sh@353 -- # local d=1 00:05:42.167 15:42:27 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.167 15:42:27 event -- scripts/common.sh@355 -- # echo 1 00:05:42.167 15:42:27 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.167 15:42:27 event -- scripts/common.sh@366 -- # decimal 2 00:05:42.167 15:42:27 event -- scripts/common.sh@353 -- # local d=2 00:05:42.167 15:42:27 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.167 15:42:27 event -- scripts/common.sh@355 -- # echo 2 00:05:42.167 15:42:27 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.167 15:42:27 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.167 15:42:27 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.167 15:42:27 event -- scripts/common.sh@368 -- # return 0 00:05:42.167 15:42:27 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.167 15:42:27 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:42.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.167 --rc genhtml_branch_coverage=1 00:05:42.167 --rc genhtml_function_coverage=1 00:05:42.167 --rc genhtml_legend=1 00:05:42.167 --rc geninfo_all_blocks=1 00:05:42.167 --rc geninfo_unexecuted_blocks=1 00:05:42.167 00:05:42.167 ' 00:05:42.167 15:42:27 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:42.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.167 --rc genhtml_branch_coverage=1 00:05:42.167 --rc genhtml_function_coverage=1 00:05:42.167 --rc genhtml_legend=1 00:05:42.167 --rc geninfo_all_blocks=1 00:05:42.167 --rc geninfo_unexecuted_blocks=1 00:05:42.167 00:05:42.167 ' 00:05:42.167 15:42:27 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:42.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.167 --rc genhtml_branch_coverage=1 00:05:42.167 --rc genhtml_function_coverage=1 00:05:42.167 --rc genhtml_legend=1 00:05:42.167 --rc geninfo_all_blocks=1 00:05:42.167 --rc geninfo_unexecuted_blocks=1 00:05:42.167 00:05:42.167 ' 00:05:42.167 15:42:27 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:42.167 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.167 --rc genhtml_branch_coverage=1 00:05:42.167 --rc genhtml_function_coverage=1 00:05:42.167 --rc genhtml_legend=1 00:05:42.167 --rc geninfo_all_blocks=1 00:05:42.167 --rc geninfo_unexecuted_blocks=1 00:05:42.167 00:05:42.167 ' 00:05:42.167 15:42:27 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:42.167 15:42:27 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:42.167 15:42:27 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:42.167 15:42:27 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:42.167 15:42:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.167 15:42:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.167 ************************************ 00:05:42.167 START TEST event_perf 00:05:42.167 ************************************ 00:05:42.167 15:42:27 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
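The run_test line above launches the event_perf benchmark whose output follows. A sketch of the same invocation with the flags used in this run (repo-root-relative path):

# -m 0xF hands the app lcores 0-3, one reactor per core, matching the four reactors started below;
# -t 1 runs the event loop for one second before reporting per-lcore event counts
test/event/event_perf/event_perf -m 0xF -t 1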
00:05:42.167 Running I/O for 1 seconds...[2024-11-17 15:42:27.496607] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:42.167 [2024-11-17 15:42:27.496683] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132341 ] 00:05:42.167 [2024-11-17 15:42:27.625088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:42.426 [2024-11-17 15:42:27.730398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.426 [2024-11-17 15:42:27.730475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.426 [2024-11-17 15:42:27.730530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.426 [2024-11-17 15:42:27.730553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:43.812 Running I/O for 1 seconds... 00:05:43.812 lcore 0: 216116 00:05:43.812 lcore 1: 216117 00:05:43.812 lcore 2: 216120 00:05:43.812 lcore 3: 216118 00:05:43.812 done. 00:05:43.812 00:05:43.812 real 0m1.495s 00:05:43.812 user 0m4.340s 00:05:43.812 sys 0m0.151s 00:05:43.812 15:42:28 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.812 15:42:28 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:43.812 ************************************ 00:05:43.812 END TEST event_perf 00:05:43.812 ************************************ 00:05:43.812 15:42:28 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:43.812 15:42:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:43.812 15:42:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.812 15:42:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:43.812 ************************************ 00:05:43.812 START TEST event_reactor 00:05:43.812 ************************************ 00:05:43.812 15:42:29 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:43.812 [2024-11-17 15:42:29.075972] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:43.812 [2024-11-17 15:42:29.076052] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132630 ] 00:05:43.812 [2024-11-17 15:42:29.205931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.812 [2024-11-17 15:42:29.301509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.195 test_start 00:05:45.195 oneshot 00:05:45.195 tick 100 00:05:45.195 tick 100 00:05:45.195 tick 250 00:05:45.195 tick 100 00:05:45.195 tick 100 00:05:45.195 tick 100 00:05:45.195 tick 250 00:05:45.195 tick 500 00:05:45.195 tick 100 00:05:45.195 tick 100 00:05:45.195 tick 250 00:05:45.195 tick 100 00:05:45.195 tick 100 00:05:45.195 test_end 00:05:45.195 00:05:45.195 real 0m1.487s 00:05:45.195 user 0m1.339s 00:05:45.195 sys 0m0.142s 00:05:45.195 15:42:30 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.195 15:42:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:45.195 ************************************ 00:05:45.195 END TEST event_reactor 00:05:45.195 ************************************ 00:05:45.195 15:42:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.195 15:42:30 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:45.195 15:42:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.195 15:42:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.195 ************************************ 00:05:45.195 START TEST event_reactor_perf 00:05:45.195 ************************************ 00:05:45.195 15:42:30 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:45.195 [2024-11-17 15:42:30.651718] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:45.195 [2024-11-17 15:42:30.651804] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132924 ] 00:05:45.455 [2024-11-17 15:42:30.785082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.455 [2024-11-17 15:42:30.879417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.833 test_start 00:05:46.833 test_end 00:05:46.833 Performance: 409907 events per second 00:05:46.833 00:05:46.833 real 0m1.478s 00:05:46.833 user 0m1.330s 00:05:46.833 sys 0m0.142s 00:05:46.833 15:42:32 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.833 15:42:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.833 ************************************ 00:05:46.833 END TEST event_reactor_perf 00:05:46.833 ************************************ 00:05:46.833 15:42:32 event -- event/event.sh@49 -- # uname -s 00:05:46.833 15:42:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:46.833 15:42:32 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:46.833 15:42:32 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.833 15:42:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.833 15:42:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.833 ************************************ 00:05:46.833 START TEST event_scheduler 00:05:46.833 ************************************ 00:05:46.833 15:42:32 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:46.833 * Looking for test storage... 
00:05:46.833 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:05:46.833 15:42:32 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:46.833 15:42:32 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:46.833 15:42:32 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:46.833 15:42:32 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:46.833 15:42:32 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.834 15:42:32 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.834 15:42:32 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.834 15:42:32 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:46.834 15:42:32 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.834 15:42:32 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:46.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.834 --rc genhtml_branch_coverage=1 00:05:46.834 --rc genhtml_function_coverage=1 00:05:46.834 --rc genhtml_legend=1 00:05:46.834 --rc geninfo_all_blocks=1 00:05:46.834 --rc geninfo_unexecuted_blocks=1 00:05:46.834 00:05:46.834 ' 00:05:46.834 15:42:32 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:46.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.834 --rc genhtml_branch_coverage=1 00:05:46.834 --rc genhtml_function_coverage=1 00:05:46.834 --rc genhtml_legend=1 00:05:46.834 --rc geninfo_all_blocks=1 00:05:46.834 --rc geninfo_unexecuted_blocks=1 00:05:46.834 00:05:46.834 ' 00:05:46.834 15:42:32 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:46.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.834 --rc genhtml_branch_coverage=1 00:05:46.834 --rc genhtml_function_coverage=1 00:05:46.834 --rc genhtml_legend=1 00:05:46.834 --rc geninfo_all_blocks=1 00:05:46.834 --rc geninfo_unexecuted_blocks=1 00:05:46.834 00:05:46.834 ' 00:05:46.834 15:42:32 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:46.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.834 --rc genhtml_branch_coverage=1 00:05:46.834 --rc genhtml_function_coverage=1 00:05:46.834 --rc genhtml_legend=1 00:05:46.834 --rc geninfo_all_blocks=1 00:05:46.834 --rc geninfo_unexecuted_blocks=1 00:05:46.834 00:05:46.834 ' 00:05:46.834 15:42:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:46.834 15:42:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3133317 00:05:46.834 15:42:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.834 15:42:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:46.834 15:42:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3133317 
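Each of these suites opens with the same scripts/common.sh xtrace seen again above: the lt/cmp_versions helpers split the installed lcov version and the required one on '.', '-' and ':' and compare the pieces element by element to decide whether branch and function coverage options can be passed to lcov. A condensed sketch of that comparison, reconstructed from the trace rather than copied from the script:

# lt A B: succeed (return 0) when version A sorts before version B, e.g. lt 1.15 2
lt() {
    local -a ver1 ver2
    local v len1 len2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len1=${#ver1[@]} len2=${#ver2[@]}
    for (( v = 0; v < (len1 > len2 ? len1 : len2); v++ )); do
        # the real helper first checks each field is numeric via the decimal() regex traced above
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # versions are equal, so not strictly less than
}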
00:05:46.834 15:42:32 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3133317 ']' 00:05:46.834 15:42:32 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.834 15:42:32 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.834 15:42:32 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.834 15:42:32 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.834 15:42:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.093 [2024-11-17 15:42:32.447653] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:47.093 [2024-11-17 15:42:32.447747] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3133317 ] 00:05:47.094 [2024-11-17 15:42:32.578991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.353 [2024-11-17 15:42:32.682993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.353 [2024-11-17 15:42:32.683063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.353 [2024-11-17 15:42:32.683115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.353 [2024-11-17 15:42:32.683125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.920 15:42:33 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.921 15:42:33 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:47.921 15:42:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:47.921 15:42:33 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.921 15:42:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.921 [2024-11-17 15:42:33.257627] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:47.921 [2024-11-17 15:42:33.257662] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:47.921 [2024-11-17 15:42:33.257684] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:47.921 [2024-11-17 15:42:33.257696] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:47.921 [2024-11-17 15:42:33.257708] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:47.921 15:42:33 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.921 15:42:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:47.921 15:42:33 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.921 15:42:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.180 [2024-11-17 15:42:33.530368] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
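The scheduler is switched to "dynamic" over RPC before framework_start_init lets startup finish; the dpdk_governor error above only means the 0xF core mask does not cover whole SMT sibling sets on this host, so the governor is skipped and the dynamic scheduler still comes up with the limits logged above. A hedged sketch of the same two steps issued by hand against any SPDK app started with --wait-for-rpc (paths are assumptions; the scheduler test app listens on the default /var/tmp/spdk.sock shown above):

  # Sketch: drive the same sequence manually against an app started with --wait-for-rpc.
  RPC=./scripts/rpc.py              # rpc.py inside the spdk checkout; adjust as needed
  SOCK=/var/tmp/spdk.sock           # default RPC socket, same one the test waits on

  $RPC -s "$SOCK" framework_set_scheduler dynamic   # must happen before init completes
  $RPC -s "$SOCK" framework_get_scheduler           # sanity check: should report "dynamic"
  $RPC -s "$SOCK" framework_start_init              # lets the blocked startup continue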
00:05:48.180 15:42:33 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.180 15:42:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:48.180 15:42:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.180 15:42:33 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.180 15:42:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:48.180 ************************************ 00:05:48.180 START TEST scheduler_create_thread 00:05:48.180 ************************************ 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.180 2 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.180 3 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.180 4 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.180 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.180 5 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.181 6 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.181 7 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.181 8 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.181 9 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.181 10 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.181 15:42:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:48.749 15:42:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.749 15:42:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:48.749 15:42:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.749 15:42:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.128 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.128 15:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:50.128 15:42:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:50.128 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.128 15:42:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.507 15:42:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.507 00:05:51.507 real 0m3.107s 00:05:51.507 user 0m0.025s 00:05:51.507 sys 0m0.007s 00:05:51.507 15:42:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.507 15:42:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.507 ************************************ 00:05:51.507 END TEST scheduler_create_thread 00:05:51.507 ************************************ 00:05:51.507 15:42:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:51.507 15:42:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3133317 00:05:51.507 15:42:36 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3133317 ']' 00:05:51.507 15:42:36 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 3133317 00:05:51.507 15:42:36 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:51.507 15:42:36 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.507 15:42:36 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3133317 00:05:51.507 15:42:36 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:51.507 15:42:36 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:51.507 15:42:36 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3133317' 00:05:51.507 killing process with pid 3133317 00:05:51.507 15:42:36 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3133317 00:05:51.507 15:42:36 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3133317 00:05:51.766 [2024-11-17 15:42:37.059321] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
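Everything scheduler_create_thread did above goes through test-only RPCs provided by a plugin shipped alongside the scheduler test app (hence the --plugin scheduler_plugin flag on every call). A sketch of the same create / set-active / delete cycle, assuming the plugin module is importable by rpc.py (e.g. via PYTHONPATH) and the test app is still running on the default socket:

  # Sketch: exercise the same thread lifecycle via the test-only plugin RPCs.
  RPC="./scripts/rpc.py --plugin scheduler_plugin"   # plugin must be on PYTHONPATH

  # one fully busy thread pinned to each core of the 0xF mask
  for mask in 0x1 0x2 0x4 0x8; do
      $RPC scheduler_thread_create -n "active_pinned_$mask" -m "$mask" -a 100
  done

  # an unpinned thread that is active 30% of the time; the RPC prints its thread id
  tid=$($RPC scheduler_thread_create -n one_third_active -a 30)

  # change its activity at runtime, then remove it again
  $RPC scheduler_thread_set_active "$tid" 50
  $RPC scheduler_thread_delete "$tid"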
00:05:52.706 00:05:52.706 real 0m5.985s 00:05:52.706 user 0m12.165s 00:05:52.707 sys 0m0.622s 00:05:52.707 15:42:38 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.707 15:42:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.707 ************************************ 00:05:52.707 END TEST event_scheduler 00:05:52.707 ************************************ 00:05:52.707 15:42:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:52.707 15:42:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:52.707 15:42:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.707 15:42:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.707 15:42:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.707 ************************************ 00:05:52.707 START TEST app_repeat 00:05:52.707 ************************************ 00:05:52.707 15:42:38 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3134362 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3134362' 00:05:52.707 Process app_repeat pid: 3134362 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:52.707 spdk_app_start Round 0 00:05:52.707 15:42:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3134362 /var/tmp/spdk-nbd.sock 00:05:52.707 15:42:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3134362 ']' 00:05:52.707 15:42:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.707 15:42:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.707 15:42:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:52.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.707 15:42:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.707 15:42:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:52.967 [2024-11-17 15:42:38.295743] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:52.967 [2024-11-17 15:42:38.295835] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134362 ] 00:05:52.967 [2024-11-17 15:42:38.429740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.226 [2024-11-17 15:42:38.529307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.226 [2024-11-17 15:42:38.529320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.799 15:42:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.800 15:42:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.800 15:42:39 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.063 Malloc0 00:05:54.063 15:42:39 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.063 Malloc1 00:05:54.322 15:42:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.322 /dev/nbd0 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
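Each app_repeat round follows the same recipe: two 64 MiB malloc bdevs with 4 KiB blocks are created over the app's RPC socket (-r /var/tmp/spdk-nbd.sock above), each is exported as a kernel /dev/nbdX device, and the script polls /proc/partitions until both devices appear. A sketch of that setup half, using only calls visible in the trace (requires the nbd kernel module, which the earlier modprobe loads):

  # Sketch: the setup half of one round -- malloc bdevs exported over NBD.
  RPC=./scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock        # the socket app_repeat was started with (-r)

  modprobe nbd                                   # kernel NBD support, loaded up front by the test

  $RPC -s "$SOCK" bdev_malloc_create 64 4096     # 64 MiB, 4 KiB blocks -> auto-named Malloc0
  $RPC -s "$SOCK" bdev_malloc_create 64 4096     # second bdev -> Malloc1
  $RPC -s "$SOCK" nbd_start_disk Malloc0 /dev/nbd0
  $RPC -s "$SOCK" nbd_start_disk Malloc1 /dev/nbd1

  # wait until the kernel actually sees both devices, as waitfornbd does above
  until grep -qw nbd0 /proc/partitions && grep -qw nbd1 /proc/partitions; do
      sleep 0.1
  done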
00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.322 1+0 records in 00:05:54.322 1+0 records out 00:05:54.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228806 s, 17.9 MB/s 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.322 15:42:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.322 15:42:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.582 /dev/nbd1 00:05:54.582 15:42:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.582 15:42:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.582 1+0 records in 00:05:54.582 1+0 records out 00:05:54.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275523 s, 14.9 MB/s 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.582 15:42:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.582 15:42:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.582 15:42:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.582 15:42:40 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.582 15:42:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.582 15:42:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.842 { 00:05:54.842 "nbd_device": "/dev/nbd0", 00:05:54.842 "bdev_name": "Malloc0" 00:05:54.842 }, 00:05:54.842 { 00:05:54.842 "nbd_device": "/dev/nbd1", 00:05:54.842 "bdev_name": "Malloc1" 00:05:54.842 } 00:05:54.842 ]' 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.842 { 00:05:54.842 "nbd_device": "/dev/nbd0", 00:05:54.842 "bdev_name": "Malloc0" 00:05:54.842 }, 00:05:54.842 { 00:05:54.842 "nbd_device": "/dev/nbd1", 00:05:54.842 "bdev_name": "Malloc1" 00:05:54.842 } 00:05:54.842 ]' 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.842 /dev/nbd1' 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.842 /dev/nbd1' 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.842 256+0 records in 00:05:54.842 256+0 records out 00:05:54.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105999 s, 98.9 MB/s 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.842 256+0 records in 00:05:54.842 256+0 records out 00:05:54.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149321 s, 70.2 MB/s 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.842 15:42:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.102 256+0 records in 00:05:55.102 256+0 records out 00:05:55.102 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182081 s, 57.6 MB/s 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.102 15:42:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.361 15:42:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.361 15:42:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.361 15:42:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.361 15:42:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.361 15:42:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.361 15:42:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:05:55.361 15:42:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.361 15:42:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.361 15:42:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.361 15:42:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.361 15:42:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.620 15:42:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.620 15:42:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.620 15:42:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.620 15:42:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.620 15:42:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.620 15:42:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.620 15:42:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.620 15:42:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.620 15:42:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.620 15:42:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.620 15:42:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.620 15:42:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.620 15:42:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.189 15:42:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.128 [2024-11-17 15:42:42.566314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.128 [2024-11-17 15:42:42.660530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.128 [2024-11-17 15:42:42.660530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.387 [2024-11-17 15:42:42.834108] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.387 [2024-11-17 15:42:42.834172] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.293 15:42:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.293 15:42:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.293 spdk_app_start Round 1 00:05:59.293 15:42:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3134362 /var/tmp/spdk-nbd.sock 00:05:59.293 15:42:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3134362 ']' 00:05:59.293 15:42:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.293 15:42:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.293 15:42:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
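The verify half that Round 0 just finished writes a 1 MiB random pattern through each NBD device with oflag=direct, compares the device contents back against the pattern file, stops the disks, confirms nbd_get_disks is empty, and SIGTERMs the app; the sleep 3 above is the gap before the next round. A sketch of that half (the pattern file path is a placeholder; the test keeps its copy under test/event/):

  # Sketch: the data-verify and teardown half of one round.
  RPC=./scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock
  PATTERN=/tmp/nbdrandtest                             # placeholder path for the random pattern

  dd if=/dev/urandom of="$PATTERN" bs=4096 count=256   # 1 MiB of random data
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if="$PATTERN" of="$dev" bs=4096 count=256 oflag=direct
      cmp -b -n 1M "$PATTERN" "$dev"                   # any mismatch fails the round
  done
  rm "$PATTERN"

  $RPC -s "$SOCK" nbd_stop_disk /dev/nbd0
  $RPC -s "$SOCK" nbd_stop_disk /dev/nbd1
  $RPC -s "$SOCK" nbd_get_disks                        # should now print an empty list
  $RPC -s "$SOCK" spdk_kill_instance SIGTERM           # ends the round, as above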
00:05:59.293 15:42:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.293 15:42:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.293 15:42:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.293 15:42:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.293 15:42:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.553 Malloc0 00:05:59.553 15:42:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.813 Malloc1 00:05:59.813 15:42:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.813 /dev/nbd0 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.813 15:42:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.813 15:42:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:59.813 15:42:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.813 15:42:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.813 15:42:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.813 15:42:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:59.813 15:42:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.813 15:42:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.813 15:42:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.813 15:42:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:05:59.813 1+0 records in 00:05:59.813 1+0 records out 00:05:59.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259573 s, 15.8 MB/s 00:05:59.813 15:42:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:00.072 15:42:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.072 15:42:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:00.072 15:42:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.073 15:42:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.073 15:42:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.073 15:42:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.073 /dev/nbd1 00:06:00.073 15:42:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.073 15:42:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.073 1+0 records in 00:06:00.073 1+0 records out 00:06:00.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271497 s, 15.1 MB/s 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.073 15:42:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.073 15:42:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.073 15:42:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.073 15:42:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.073 15:42:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.073 15:42:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.378 15:42:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.378 { 00:06:00.378 
"nbd_device": "/dev/nbd0", 00:06:00.378 "bdev_name": "Malloc0" 00:06:00.378 }, 00:06:00.378 { 00:06:00.378 "nbd_device": "/dev/nbd1", 00:06:00.378 "bdev_name": "Malloc1" 00:06:00.378 } 00:06:00.378 ]' 00:06:00.378 15:42:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.378 { 00:06:00.379 "nbd_device": "/dev/nbd0", 00:06:00.379 "bdev_name": "Malloc0" 00:06:00.379 }, 00:06:00.379 { 00:06:00.379 "nbd_device": "/dev/nbd1", 00:06:00.379 "bdev_name": "Malloc1" 00:06:00.379 } 00:06:00.379 ]' 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.379 /dev/nbd1' 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.379 /dev/nbd1' 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.379 256+0 records in 00:06:00.379 256+0 records out 00:06:00.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115599 s, 90.7 MB/s 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.379 256+0 records in 00:06:00.379 256+0 records out 00:06:00.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214731 s, 48.8 MB/s 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.379 15:42:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.675 256+0 records in 00:06:00.675 256+0 records out 00:06:00.675 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187052 s, 56.1 MB/s 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.675 15:42:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.675 15:42:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.675 15:42:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.675 15:42:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.675 15:42:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.675 15:42:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.675 15:42:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.675 15:42:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.675 15:42:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.675 15:42:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.675 15:42:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.934 15:42:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.934 15:42:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.934 15:42:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.934 15:42:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.934 15:42:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.934 15:42:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.934 15:42:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.934 15:42:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.934 15:42:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.934 15:42:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.934 15:42:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.193 15:42:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.193 15:42:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.193 15:42:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.193 15:42:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.193 15:42:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.193 15:42:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.193 15:42:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.193 15:42:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.193 15:42:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.193 15:42:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.193 15:42:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.193 15:42:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.193 15:42:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.452 15:42:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.831 [2024-11-17 15:42:48.069100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.831 [2024-11-17 15:42:48.164267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.831 [2024-11-17 15:42:48.164274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.831 [2024-11-17 15:42:48.335335] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.831 [2024-11-17 15:42:48.335399] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.735 15:42:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.735 15:42:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.735 spdk_app_start Round 2 00:06:04.735 15:42:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3134362 /var/tmp/spdk-nbd.sock 00:06:04.735 15:42:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3134362 ']' 00:06:04.735 15:42:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.735 15:42:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.735 15:42:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:04.735 15:42:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.735 15:42:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.735 15:42:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.735 15:42:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.735 15:42:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.994 Malloc0 00:06:04.994 15:42:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.253 Malloc1 00:06:05.253 15:42:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.253 15:42:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.511 /dev/nbd0 00:06:05.511 15:42:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.511 15:42:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:05.511 1+0 records in 00:06:05.511 1+0 records out 00:06:05.511 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224105 s, 18.3 MB/s 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.511 15:42:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.511 15:42:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.511 15:42:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.511 15:42:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.511 /dev/nbd1 00:06:05.769 15:42:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.769 15:42:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.769 15:42:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:05.769 15:42:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.769 15:42:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.769 15:42:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.769 15:42:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:05.769 15:42:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.769 15:42:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.769 15:42:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.769 15:42:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.769 1+0 records in 00:06:05.769 1+0 records out 00:06:05.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259338 s, 15.8 MB/s 00:06:05.770 15:42:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.770 15:42:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.770 15:42:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:05.770 15:42:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.770 15:42:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.770 15:42:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.770 15:42:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.770 15:42:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.770 15:42:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.770 15:42:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.770 15:42:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.770 { 00:06:05.770 
"nbd_device": "/dev/nbd0", 00:06:05.770 "bdev_name": "Malloc0" 00:06:05.770 }, 00:06:05.770 { 00:06:05.770 "nbd_device": "/dev/nbd1", 00:06:05.770 "bdev_name": "Malloc1" 00:06:05.770 } 00:06:05.770 ]' 00:06:05.770 15:42:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.770 { 00:06:05.770 "nbd_device": "/dev/nbd0", 00:06:05.770 "bdev_name": "Malloc0" 00:06:05.770 }, 00:06:05.770 { 00:06:05.770 "nbd_device": "/dev/nbd1", 00:06:05.770 "bdev_name": "Malloc1" 00:06:05.770 } 00:06:05.770 ]' 00:06:05.770 15:42:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.028 /dev/nbd1' 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.028 /dev/nbd1' 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.028 256+0 records in 00:06:06.028 256+0 records out 00:06:06.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106021 s, 98.9 MB/s 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.028 256+0 records in 00:06:06.028 256+0 records out 00:06:06.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02111 s, 49.7 MB/s 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.028 256+0 records in 00:06:06.028 256+0 records out 00:06:06.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178337 s, 58.8 MB/s 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.028 15:42:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.286 15:42:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.286 15:42:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.286 15:42:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.286 15:42:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.286 15:42:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.286 15:42:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.286 15:42:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.286 15:42:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.286 15:42:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.286 15:42:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.286 15:42:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.544 15:42:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.545 15:42:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.545 15:42:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.545 15:42:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.545 15:42:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.545 15:42:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.545 15:42:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.545 15:42:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.545 15:42:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.545 15:42:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.545 15:42:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.545 15:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.545 15:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.545 15:42:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.545 15:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.545 15:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.545 15:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.545 15:42:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.545 15:42:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.545 15:42:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.545 15:42:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.545 15:42:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.545 15:42:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.111 15:42:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.045 [2024-11-17 15:42:53.550830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.304 [2024-11-17 15:42:53.645475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.304 [2024-11-17 15:42:53.645475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.304 [2024-11-17 15:42:53.816107] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.304 [2024-11-17 15:42:53.816161] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.204 15:42:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3134362 /var/tmp/spdk-nbd.sock 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3134362 ']' 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
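The app_repeat round logged above reduces to a short RPC sequence against the nbd socket: create two malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, push random data through the block devices and compare it back, then detach the exports and kill the instance. A minimal sketch of that flow, assuming a target is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded (every command below appears in the trace; only the shell variables are added for brevity):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    $rpc -s $sock bdev_malloc_create 64 4096            # -> Malloc0
    $rpc -s $sock bdev_malloc_create 64 4096            # -> Malloc1
    $rpc -s $sock nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s $sock nbd_start_disk Malloc1 /dev/nbd1
    $rpc -s $sock nbd_get_disks                         # JSON list of the two exports

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                  # read-back must match the pattern
    cmp -b -n 1M nbdrandtest /dev/nbd1

    $rpc -s $sock nbd_stop_disk /dev/nbd0
    $rpc -s $sock nbd_stop_disk /dev/nbd1
    $rpc -s $sock spdk_kill_instance SIGTERM            # ends the round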
00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.204 15:42:55 event.app_repeat -- event/event.sh@39 -- # killprocess 3134362 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3134362 ']' 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3134362 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3134362 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3134362' 00:06:10.204 killing process with pid 3134362 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3134362 00:06:10.204 15:42:55 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3134362 00:06:11.138 spdk_app_start is called in Round 0. 00:06:11.138 Shutdown signal received, stop current app iteration 00:06:11.138 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:11.138 spdk_app_start is called in Round 1. 00:06:11.138 Shutdown signal received, stop current app iteration 00:06:11.138 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:11.138 spdk_app_start is called in Round 2. 00:06:11.138 Shutdown signal received, stop current app iteration 00:06:11.138 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:11.138 spdk_app_start is called in Round 3. 
00:06:11.139 Shutdown signal received, stop current app iteration 00:06:11.139 15:42:56 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:11.139 15:42:56 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:11.139 00:06:11.139 real 0m18.421s 00:06:11.139 user 0m38.483s 00:06:11.139 sys 0m3.081s 00:06:11.139 15:42:56 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.139 15:42:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.139 ************************************ 00:06:11.139 END TEST app_repeat 00:06:11.139 ************************************ 00:06:11.397 15:42:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:11.397 15:42:56 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:11.397 15:42:56 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.397 15:42:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.397 15:42:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.397 ************************************ 00:06:11.397 START TEST cpu_locks 00:06:11.397 ************************************ 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:11.397 * Looking for test storage... 00:06:11.397 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.397 15:42:56 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:11.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.397 --rc genhtml_branch_coverage=1 00:06:11.397 --rc genhtml_function_coverage=1 00:06:11.397 --rc genhtml_legend=1 00:06:11.397 --rc geninfo_all_blocks=1 00:06:11.397 --rc geninfo_unexecuted_blocks=1 00:06:11.397 00:06:11.397 ' 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:11.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.397 --rc genhtml_branch_coverage=1 00:06:11.397 --rc genhtml_function_coverage=1 00:06:11.397 --rc genhtml_legend=1 00:06:11.397 --rc geninfo_all_blocks=1 00:06:11.397 --rc geninfo_unexecuted_blocks=1 00:06:11.397 00:06:11.397 ' 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:11.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.397 --rc genhtml_branch_coverage=1 00:06:11.397 --rc genhtml_function_coverage=1 00:06:11.397 --rc genhtml_legend=1 00:06:11.397 --rc geninfo_all_blocks=1 00:06:11.397 --rc geninfo_unexecuted_blocks=1 00:06:11.397 00:06:11.397 ' 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:11.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.397 --rc genhtml_branch_coverage=1 00:06:11.397 --rc genhtml_function_coverage=1 00:06:11.397 --rc genhtml_legend=1 00:06:11.397 --rc geninfo_all_blocks=1 00:06:11.397 --rc geninfo_unexecuted_blocks=1 00:06:11.397 00:06:11.397 ' 00:06:11.397 15:42:56 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:11.397 15:42:56 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:11.397 15:42:56 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:11.397 15:42:56 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.397 15:42:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.656 ************************************ 
00:06:11.656 START TEST default_locks 00:06:11.656 ************************************ 00:06:11.656 15:42:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:11.656 15:42:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3137880 00:06:11.656 15:42:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3137880 00:06:11.656 15:42:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.656 15:42:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3137880 ']' 00:06:11.656 15:42:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.656 15:42:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.656 15:42:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.656 15:42:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.656 15:42:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.656 [2024-11-17 15:42:57.051620] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:11.656 [2024-11-17 15:42:57.051712] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3137880 ] 00:06:11.656 [2024-11-17 15:42:57.180958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.915 [2024-11-17 15:42:57.277811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.481 15:42:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.481 15:42:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:12.481 15:42:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3137880 00:06:12.481 15:42:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3137880 00:06:12.481 15:42:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.049 lslocks: write error 00:06:13.049 15:42:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3137880 00:06:13.049 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3137880 ']' 00:06:13.049 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3137880 00:06:13.049 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:13.049 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.049 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3137880 00:06:13.049 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.049 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.049 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3137880' 00:06:13.049 killing process with pid 3137880 00:06:13.049 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3137880 00:06:13.049 15:42:58 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3137880 00:06:15.582 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3137880 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3137880 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3137880 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3137880 ']' 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.583 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3137880) - No such process 00:06:15.583 ERROR: process (pid: 3137880) is no longer running 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.583 00:06:15.583 real 0m3.653s 00:06:15.583 user 0m3.608s 00:06:15.583 sys 0m0.694s 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.583 15:43:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.583 ************************************ 00:06:15.583 END TEST default_locks 00:06:15.583 ************************************ 00:06:15.583 15:43:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:15.583 15:43:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.583 15:43:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.583 15:43:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.583 ************************************ 00:06:15.583 START TEST default_locks_via_rpc 00:06:15.583 ************************************ 00:06:15.583 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:15.583 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3138619 00:06:15.583 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3138619 00:06:15.583 15:43:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.583 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3138619 ']' 00:06:15.583 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.583 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.583 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:15.583 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.583 15:43:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.583 [2024-11-17 15:43:00.777119] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:15.583 [2024-11-17 15:43:00.777221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3138619 ] 00:06:15.583 [2024-11-17 15:43:00.909234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.583 [2024-11-17 15:43:01.010215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3138619 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3138619 00:06:16.518 15:43:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.776 15:43:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3138619 00:06:16.776 15:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3138619 ']' 00:06:16.776 15:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3138619 00:06:16.776 15:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:16.776 15:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.776 15:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3138619 00:06:16.776 15:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.776 
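The default_locks and default_locks_via_rpc runs above probe the same core-lock mechanism from two sides: the first starts spdk_tgt with -m 0x1 and expects lslocks to report an spdk_cpu_lock entry for the pid, the second starts it the same way and then drops and re-takes the lock at runtime through the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs (the harness wraps them in its rpc_cmd helper). A condensed sketch of that check, assuming the workspace layout used by this job and that the shell waits for /var/tmp/spdk.sock before issuing RPCs:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    $spdk/build/bin/spdk_tgt -m 0x1 &                     # claims core 0 on startup
    pid=$!
    # (the test scripts use their waitforlisten helper at this point)

    lslocks -p $pid | grep -q spdk_cpu_lock && echo "core lock held"

    $spdk/scripts/rpc.py framework_disable_cpumask_locks  # release the lock at runtime
    lslocks -p $pid | grep -q spdk_cpu_lock || echo "core lock released"
    $spdk/scripts/rpc.py framework_enable_cpumask_locks   # take it again

    kill $pid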
15:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.776 15:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3138619' 00:06:16.776 killing process with pid 3138619 00:06:16.776 15:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3138619 00:06:16.776 15:43:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3138619 00:06:19.306 00:06:19.306 real 0m3.777s 00:06:19.306 user 0m3.742s 00:06:19.306 sys 0m0.756s 00:06:19.306 15:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.306 15:43:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.306 ************************************ 00:06:19.306 END TEST default_locks_via_rpc 00:06:19.306 ************************************ 00:06:19.306 15:43:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:19.306 15:43:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.306 15:43:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.306 15:43:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.306 ************************************ 00:06:19.306 START TEST non_locking_app_on_locked_coremask 00:06:19.306 ************************************ 00:06:19.306 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:19.306 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3139230 00:06:19.306 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3139230 /var/tmp/spdk.sock 00:06:19.306 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.306 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3139230 ']' 00:06:19.306 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.306 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.306 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.306 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.306 15:43:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.306 [2024-11-17 15:43:04.634205] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:19.306 [2024-11-17 15:43:04.634300] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139230 ] 00:06:19.306 [2024-11-17 15:43:04.764423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.565 [2024-11-17 15:43:04.862608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.130 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.130 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.130 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3139461 00:06:20.131 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3139461 /var/tmp/spdk2.sock 00:06:20.131 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:20.131 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3139461 ']' 00:06:20.131 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:20.131 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.131 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:20.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:20.131 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.131 15:43:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.389 [2024-11-17 15:43:05.683519] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:20.389 [2024-11-17 15:43:05.683611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3139461 ] 00:06:20.389 [2024-11-17 15:43:05.860921] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:20.389 [2024-11-17 15:43:05.860967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.648 [2024-11-17 15:43:06.058537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.034 15:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.034 15:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.034 15:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3139230 00:06:22.034 15:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:22.034 15:43:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3139230 00:06:22.976 lslocks: write error 00:06:22.976 15:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3139230 00:06:22.976 15:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3139230 ']' 00:06:22.976 15:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3139230 00:06:22.976 15:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.977 15:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.977 15:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3139230 00:06:22.977 15:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.977 15:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.977 15:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3139230' 00:06:22.977 killing process with pid 3139230 00:06:22.977 15:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3139230 00:06:22.977 15:43:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3139230 00:06:27.162 15:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3139461 00:06:27.162 15:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3139461 ']' 00:06:27.162 15:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3139461 00:06:27.162 15:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.162 15:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.162 15:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3139461 00:06:27.162 15:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.162 15:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.162 15:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3139461' 00:06:27.162 
killing process with pid 3139461 00:06:27.162 15:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3139461 00:06:27.162 15:43:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3139461 00:06:29.693 00:06:29.693 real 0m10.225s 00:06:29.693 user 0m10.319s 00:06:29.693 sys 0m1.434s 00:06:29.693 15:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.693 15:43:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.693 ************************************ 00:06:29.693 END TEST non_locking_app_on_locked_coremask 00:06:29.693 ************************************ 00:06:29.693 15:43:14 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:29.693 15:43:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.693 15:43:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.693 15:43:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.693 ************************************ 00:06:29.693 START TEST locking_app_on_unlocked_coremask 00:06:29.693 ************************************ 00:06:29.693 15:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:29.693 15:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3141113 00:06:29.693 15:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3141113 /var/tmp/spdk.sock 00:06:29.693 15:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:29.693 15:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3141113 ']' 00:06:29.693 15:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.693 15:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.693 15:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.693 15:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.693 15:43:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.693 [2024-11-17 15:43:14.947636] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:29.693 [2024-11-17 15:43:14.947748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3141113 ] 00:06:29.693 [2024-11-17 15:43:15.076627] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:29.693 [2024-11-17 15:43:15.076667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.693 [2024-11-17 15:43:15.171119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.629 15:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.629 15:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:30.629 15:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3141377 00:06:30.629 15:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3141377 /var/tmp/spdk2.sock 00:06:30.629 15:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:30.629 15:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3141377 ']' 00:06:30.629 15:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.629 15:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.629 15:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.629 15:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.629 15:43:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.629 [2024-11-17 15:43:15.995810] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:30.629 [2024-11-17 15:43:15.995924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3141377 ] 00:06:30.887 [2024-11-17 15:43:16.175651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.887 [2024-11-17 15:43:16.366419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.308 15:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.308 15:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:32.308 15:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3141377 00:06:32.308 15:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3141377 00:06:32.308 15:43:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.683 lslocks: write error 00:06:33.683 15:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3141113 00:06:33.683 15:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3141113 ']' 00:06:33.683 15:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3141113 00:06:33.683 15:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:33.683 15:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.683 15:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3141113 00:06:33.683 15:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.683 15:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.683 15:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3141113' 00:06:33.683 killing process with pid 3141113 00:06:33.683 15:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3141113 00:06:33.683 15:43:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3141113 00:06:37.871 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3141377 00:06:37.871 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3141377 ']' 00:06:37.871 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3141377 00:06:37.871 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:37.871 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.871 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3141377 00:06:37.871 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:37.871 15:43:23 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:37.871 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3141377' 00:06:37.871 killing process with pid 3141377 00:06:37.871 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3141377 00:06:37.871 15:43:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3141377 00:06:40.470 00:06:40.470 real 0m10.578s 00:06:40.470 user 0m10.686s 00:06:40.470 sys 0m1.544s 00:06:40.470 15:43:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.470 15:43:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.470 ************************************ 00:06:40.470 END TEST locking_app_on_unlocked_coremask 00:06:40.470 ************************************ 00:06:40.470 15:43:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:40.470 15:43:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.470 15:43:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.470 15:43:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.470 ************************************ 00:06:40.470 START TEST locking_app_on_locked_coremask 00:06:40.470 ************************************ 00:06:40.470 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:40.470 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3143013 00:06:40.470 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3143013 /var/tmp/spdk.sock 00:06:40.470 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.470 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3143013 ']' 00:06:40.471 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.471 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.471 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.471 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.471 15:43:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.471 [2024-11-17 15:43:25.606522] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:40.471 [2024-11-17 15:43:25.606628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143013 ] 00:06:40.471 [2024-11-17 15:43:25.739822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.471 [2024-11-17 15:43:25.840155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3143261 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3143261 /var/tmp/spdk2.sock 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3143261 /var/tmp/spdk2.sock 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3143261 /var/tmp/spdk2.sock 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3143261 ']' 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.038 15:43:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.300 [2024-11-17 15:43:26.633707] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:41.300 [2024-11-17 15:43:26.633807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143261 ] 00:06:41.300 [2024-11-17 15:43:26.812315] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3143013 has claimed it. 00:06:41.300 [2024-11-17 15:43:26.812374] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:41.868 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3143261) - No such process 00:06:41.868 ERROR: process (pid: 3143261) is no longer running 00:06:41.868 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.868 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:41.868 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:41.868 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.868 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.868 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.868 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3143013 00:06:41.868 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3143013 00:06:41.868 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.436 lslocks: write error 00:06:42.436 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3143013 00:06:42.436 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3143013 ']' 00:06:42.436 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3143013 00:06:42.436 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:42.436 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.436 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3143013 00:06:42.695 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.695 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.695 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3143013' 00:06:42.695 killing process with pid 3143013 00:06:42.695 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3143013 00:06:42.695 15:43:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3143013 00:06:45.228 00:06:45.228 real 0m4.647s 00:06:45.228 user 0m4.780s 00:06:45.228 sys 0m1.083s 00:06:45.228 15:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
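The locking_app_on_locked_coremask run that just finished is the negative counterpart of non_locking_app_on_locked_coremask earlier in the log: with the first target holding the core 0 lock, a second spdk_tgt launched on a separate RPC socket with the same -m 0x1 mask aborts with "Cannot create lock on core 0, probably process ... has claimed it" / "Unable to acquire lock on assigned core mask - exiting", whereas passing --disable-cpumask-locks lets it come up. A sketch of both outcomes, using the paths and flags seen in this job:

    tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt

    $tgt -m 0x1 &                                         # first instance claims core 0

    # same mask, second RPC socket: expected to exit with the claim error above
    $tgt -m 0x1 -r /var/tmp/spdk2.sock || echo "expected failure: core 0 already claimed"

    # with cpumask locks disabled the overlapping mask is tolerated
    $tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &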
00:06:45.228 15:43:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.228 ************************************ 00:06:45.228 END TEST locking_app_on_locked_coremask 00:06:45.228 ************************************ 00:06:45.228 15:43:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:45.228 15:43:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.228 15:43:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.228 15:43:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.228 ************************************ 00:06:45.228 START TEST locking_overlapped_coremask 00:06:45.228 ************************************ 00:06:45.228 15:43:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:45.228 15:43:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3143854 00:06:45.228 15:43:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3143854 /var/tmp/spdk.sock 00:06:45.228 15:43:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:45.228 15:43:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3143854 ']' 00:06:45.228 15:43:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.228 15:43:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.228 15:43:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.228 15:43:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.228 15:43:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.228 [2024-11-17 15:43:30.343037] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:45.228 [2024-11-17 15:43:30.343136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3143854 ] 00:06:45.228 [2024-11-17 15:43:30.477466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.228 [2024-11-17 15:43:30.582297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.228 [2024-11-17 15:43:30.582308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.228 [2024-11-17 15:43:30.582311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3144120 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3144120 /var/tmp/spdk2.sock 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3144120 /var/tmp/spdk2.sock 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3144120 /var/tmp/spdk2.sock 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3144120 ']' 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.796 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.797 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.797 15:43:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.055 [2024-11-17 15:43:31.424955] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:46.055 [2024-11-17 15:43:31.425066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144120 ] 00:06:46.315 [2024-11-17 15:43:31.611207] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3143854 has claimed it. 00:06:46.315 [2024-11-17 15:43:31.611262] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.574 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3144120) - No such process 00:06:46.574 ERROR: process (pid: 3144120) is no longer running 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3143854 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3143854 ']' 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3143854 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3143854 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3143854' 00:06:46.574 killing process with pid 3143854 00:06:46.574 15:43:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3143854 00:06:46.574 15:43:32 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3143854 00:06:49.111 00:06:49.111 real 0m4.098s 00:06:49.111 user 0m11.141s 00:06:49.111 sys 0m0.725s 00:06:49.111 15:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.111 15:43:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.111 ************************************ 00:06:49.111 END TEST locking_overlapped_coremask 00:06:49.111 ************************************ 00:06:49.111 15:43:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:49.111 15:43:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.111 15:43:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.111 15:43:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.111 ************************************ 00:06:49.111 START TEST locking_overlapped_coremask_via_rpc 00:06:49.111 ************************************ 00:06:49.111 15:43:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:49.111 15:43:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3144684 00:06:49.111 15:43:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3144684 /var/tmp/spdk.sock 00:06:49.111 15:43:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:49.111 15:43:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3144684 ']' 00:06:49.111 15:43:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.111 15:43:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.111 15:43:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.111 15:43:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.111 15:43:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.111 [2024-11-17 15:43:34.525267] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:49.111 [2024-11-17 15:43:34.525363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144684 ] 00:06:49.370 [2024-11-17 15:43:34.657588] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
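For reference, the locking_overlapped_coremask_via_rpc case starting here can be sketched as the following sequence. This is illustrative only, not the exact test script: paths are abbreviated relative to the spdk tree shown elsewhere in this log, and the waitforlisten/retry handling is omitted.

# both targets start with cpumask locks disabled; masks 0x7 and 0x1c overlap on core 2
./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
# enabling locks on the first target claims cores 0-2 and succeeds
./scripts/rpc.py framework_enable_cpumask_locks
# the second target then cannot claim core 2; rpc.py reports -32603 "Failed to claim CPU core: 2"
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks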
00:06:49.370 [2024-11-17 15:43:34.657627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.370 [2024-11-17 15:43:34.760931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.370 [2024-11-17 15:43:34.760997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.371 [2024-11-17 15:43:34.761006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.308 15:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.308 15:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.308 15:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3144825 00:06:50.308 15:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3144825 /var/tmp/spdk2.sock 00:06:50.308 15:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:50.308 15:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3144825 ']' 00:06:50.308 15:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.308 15:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.308 15:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.308 15:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.308 15:43:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.308 [2024-11-17 15:43:35.582251] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:50.309 [2024-11-17 15:43:35.582355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3144825 ] 00:06:50.309 [2024-11-17 15:43:35.769572] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.309 [2024-11-17 15:43:35.769623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.567 [2024-11-17 15:43:35.984638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.567 [2024-11-17 15:43:35.984750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:50.567 [2024-11-17 15:43:35.988576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.104 [2024-11-17 15:43:38.099691] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3144684 has claimed it. 
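The claim error above is the target-side view of the conflict; the JSON-RPC error just below is the client-side view of the same failure. Core ownership is tracked through per-core lock files under /var/tmp, which the surrounding cpu_locks tests inspect roughly as follows (illustrative; the pid is a placeholder for the running spdk_tgt):

ls /var/tmp/spdk_cpu_lock_*                        # one lock file per claimed core, here ..._000 through ..._002
lslocks -p <spdk_tgt_pid> | grep spdk_cpu_lock     # confirms which process holds each core lock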
00:06:53.104 request: 00:06:53.104 { 00:06:53.104 "method": "framework_enable_cpumask_locks", 00:06:53.104 "req_id": 1 00:06:53.104 } 00:06:53.104 Got JSON-RPC error response 00:06:53.104 response: 00:06:53.104 { 00:06:53.104 "code": -32603, 00:06:53.104 "message": "Failed to claim CPU core: 2" 00:06:53.104 } 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3144684 /var/tmp/spdk.sock 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3144684 ']' 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3144825 /var/tmp/spdk2.sock 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3144825 ']' 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.104 00:06:53.104 real 0m4.080s 00:06:53.104 user 0m1.081s 00:06:53.104 sys 0m0.247s 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.104 15:43:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.104 ************************************ 00:06:53.104 END TEST locking_overlapped_coremask_via_rpc 00:06:53.104 ************************************ 00:06:53.104 15:43:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.104 15:43:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3144684 ]] 00:06:53.104 15:43:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3144684 00:06:53.104 15:43:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3144684 ']' 00:06:53.104 15:43:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3144684 00:06:53.104 15:43:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:53.104 15:43:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.104 15:43:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3144684 00:06:53.104 15:43:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.104 15:43:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.104 15:43:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3144684' 00:06:53.104 killing process with pid 3144684 00:06:53.104 15:43:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3144684 00:06:53.104 15:43:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3144684 00:06:55.639 15:43:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3144825 ]] 00:06:55.639 15:43:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3144825 00:06:55.639 15:43:40 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3144825 ']' 00:06:55.639 15:43:40 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3144825 00:06:55.639 15:43:40 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:55.639 15:43:40 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:55.639 15:43:40 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3144825 00:06:55.639 15:43:40 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:55.639 15:43:40 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:55.639 15:43:40 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3144825' 00:06:55.639 killing process with pid 3144825 00:06:55.639 15:43:40 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3144825 00:06:55.639 15:43:40 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3144825 00:06:58.175 15:43:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.175 15:43:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:58.175 15:43:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3144684 ]] 00:06:58.175 15:43:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3144684 00:06:58.175 15:43:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3144684 ']' 00:06:58.175 15:43:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3144684 00:06:58.175 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3144684) - No such process 00:06:58.175 15:43:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3144684 is not found' 00:06:58.175 Process with pid 3144684 is not found 00:06:58.175 15:43:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3144825 ]] 00:06:58.175 15:43:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3144825 00:06:58.175 15:43:43 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3144825 ']' 00:06:58.175 15:43:43 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3144825 00:06:58.175 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3144825) - No such process 00:06:58.175 15:43:43 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3144825 is not found' 00:06:58.175 Process with pid 3144825 is not found 00:06:58.175 15:43:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.175 00:06:58.175 real 0m46.534s 00:06:58.175 user 1m19.895s 00:06:58.175 sys 0m7.872s 00:06:58.175 15:43:43 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.175 15:43:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.175 ************************************ 00:06:58.175 END TEST cpu_locks 00:06:58.175 ************************************ 00:06:58.175 00:06:58.175 real 1m16.092s 00:06:58.175 user 2m17.835s 00:06:58.175 sys 0m12.471s 00:06:58.175 15:43:43 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.175 15:43:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.175 ************************************ 00:06:58.175 END TEST event 00:06:58.175 ************************************ 00:06:58.175 15:43:43 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:58.175 15:43:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.175 15:43:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.175 15:43:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.175 ************************************ 00:06:58.175 START TEST thread 00:06:58.175 ************************************ 00:06:58.175 15:43:43 thread -- common/autotest_common.sh@1129 -- 
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:58.175 * Looking for test storage... 00:06:58.175 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:58.175 15:43:43 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.175 15:43:43 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.175 15:43:43 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.175 15:43:43 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.175 15:43:43 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.175 15:43:43 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.175 15:43:43 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.175 15:43:43 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.175 15:43:43 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.175 15:43:43 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.175 15:43:43 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.175 15:43:43 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.175 15:43:43 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.175 15:43:43 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.175 15:43:43 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.175 15:43:43 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:58.175 15:43:43 thread -- scripts/common.sh@345 -- # : 1 00:06:58.175 15:43:43 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.175 15:43:43 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:58.175 15:43:43 thread -- scripts/common.sh@365 -- # decimal 1 00:06:58.175 15:43:43 thread -- scripts/common.sh@353 -- # local d=1 00:06:58.175 15:43:43 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.175 15:43:43 thread -- scripts/common.sh@355 -- # echo 1 00:06:58.175 15:43:43 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.175 15:43:43 thread -- scripts/common.sh@366 -- # decimal 2 00:06:58.175 15:43:43 thread -- scripts/common.sh@353 -- # local d=2 00:06:58.175 15:43:43 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.175 15:43:43 thread -- scripts/common.sh@355 -- # echo 2 00:06:58.175 15:43:43 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.175 15:43:43 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.175 15:43:43 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.175 15:43:43 thread -- scripts/common.sh@368 -- # return 0 00:06:58.175 15:43:43 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.175 15:43:43 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.175 --rc genhtml_branch_coverage=1 00:06:58.175 --rc genhtml_function_coverage=1 00:06:58.175 --rc genhtml_legend=1 00:06:58.175 --rc geninfo_all_blocks=1 00:06:58.175 --rc geninfo_unexecuted_blocks=1 00:06:58.175 00:06:58.175 ' 00:06:58.175 15:43:43 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.175 --rc genhtml_branch_coverage=1 00:06:58.175 --rc genhtml_function_coverage=1 00:06:58.175 --rc genhtml_legend=1 00:06:58.175 --rc geninfo_all_blocks=1 00:06:58.175 --rc geninfo_unexecuted_blocks=1 00:06:58.175 00:06:58.175 ' 00:06:58.175 15:43:43 thread -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:58.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.175 --rc genhtml_branch_coverage=1 00:06:58.175 --rc genhtml_function_coverage=1 00:06:58.175 --rc genhtml_legend=1 00:06:58.175 --rc geninfo_all_blocks=1 00:06:58.175 --rc geninfo_unexecuted_blocks=1 00:06:58.175 00:06:58.175 ' 00:06:58.175 15:43:43 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.175 --rc genhtml_branch_coverage=1 00:06:58.175 --rc genhtml_function_coverage=1 00:06:58.175 --rc genhtml_legend=1 00:06:58.175 --rc geninfo_all_blocks=1 00:06:58.175 --rc geninfo_unexecuted_blocks=1 00:06:58.175 00:06:58.175 ' 00:06:58.175 15:43:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.175 15:43:43 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:58.175 15:43:43 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.175 15:43:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.175 ************************************ 00:06:58.175 START TEST thread_poller_perf 00:06:58.175 ************************************ 00:06:58.175 15:43:43 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.175 [2024-11-17 15:43:43.646484] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:58.175 [2024-11-17 15:43:43.646562] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3146399 ] 00:06:58.435 [2024-11-17 15:43:43.773791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.435 [2024-11-17 15:43:43.868693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.435 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:59.812 [2024-11-17T14:43:45.356Z] ====================================== 00:06:59.812 [2024-11-17T14:43:45.356Z] busy:2508258958 (cyc) 00:06:59.812 [2024-11-17T14:43:45.356Z] total_run_count: 422000 00:06:59.812 [2024-11-17T14:43:45.356Z] tsc_hz: 2500000000 (cyc) 00:06:59.812 [2024-11-17T14:43:45.356Z] ====================================== 00:06:59.812 [2024-11-17T14:43:45.356Z] poller_cost: 5943 (cyc), 2377 (nsec) 00:06:59.812 00:06:59.812 real 0m1.461s 00:06:59.812 user 0m1.332s 00:06:59.812 sys 0m0.124s 00:06:59.812 15:43:45 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.812 15:43:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.812 ************************************ 00:06:59.812 END TEST thread_poller_perf 00:06:59.812 ************************************ 00:06:59.812 15:43:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.812 15:43:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:59.812 15:43:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.812 15:43:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.812 ************************************ 00:06:59.812 START TEST thread_poller_perf 00:06:59.812 ************************************ 00:06:59.812 15:43:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:59.812 [2024-11-17 15:43:45.185772] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:59.812 [2024-11-17 15:43:45.185849] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3146684 ] 00:06:59.812 [2024-11-17 15:43:45.312812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.071 [2024-11-17 15:43:45.410582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.071 Running 1000 pollers for 1 seconds with 0 microseconds period. 
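The per-poller cost printed in these summaries follows directly from the raw counters: cycles spent busy divided by the number of poller runs, converted to nanoseconds with the reported TSC frequency. For the 1-microsecond-period run above (the 0-microsecond run below follows the same formula), that works out to roughly:

# illustrative recomputation from the figures printed by poller_perf above
echo '2508258958 422000 2500000000' | awk '{c=int($1/$2); printf "%d cyc, %d nsec\n", c, int(c/$3*1e9)}'
# -> 5943 cyc, 2377 nsec, matching the poller_cost line in the summary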
00:07:01.449 [2024-11-17T14:43:46.993Z] ====================================== 00:07:01.449 [2024-11-17T14:43:46.993Z] busy:2503109644 (cyc) 00:07:01.449 [2024-11-17T14:43:46.993Z] total_run_count: 5530000 00:07:01.449 [2024-11-17T14:43:46.993Z] tsc_hz: 2500000000 (cyc) 00:07:01.449 [2024-11-17T14:43:46.993Z] ====================================== 00:07:01.449 [2024-11-17T14:43:46.993Z] poller_cost: 452 (cyc), 180 (nsec) 00:07:01.449 00:07:01.449 real 0m1.467s 00:07:01.449 user 0m1.334s 00:07:01.449 sys 0m0.127s 00:07:01.449 15:43:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.450 15:43:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.450 ************************************ 00:07:01.450 END TEST thread_poller_perf 00:07:01.450 ************************************ 00:07:01.450 15:43:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.450 00:07:01.450 real 0m3.269s 00:07:01.450 user 0m2.835s 00:07:01.450 sys 0m0.443s 00:07:01.450 15:43:46 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.450 15:43:46 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.450 ************************************ 00:07:01.450 END TEST thread 00:07:01.450 ************************************ 00:07:01.450 15:43:46 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:01.450 15:43:46 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.450 15:43:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.450 15:43:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.450 15:43:46 -- common/autotest_common.sh@10 -- # set +x 00:07:01.450 ************************************ 00:07:01.450 START TEST app_cmdline 00:07:01.450 ************************************ 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.450 * Looking for test storage... 
00:07:01.450 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.450 15:43:46 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.450 --rc genhtml_branch_coverage=1 00:07:01.450 --rc genhtml_function_coverage=1 00:07:01.450 --rc genhtml_legend=1 00:07:01.450 --rc geninfo_all_blocks=1 00:07:01.450 --rc geninfo_unexecuted_blocks=1 00:07:01.450 00:07:01.450 ' 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.450 --rc genhtml_branch_coverage=1 00:07:01.450 --rc genhtml_function_coverage=1 00:07:01.450 --rc genhtml_legend=1 00:07:01.450 --rc geninfo_all_blocks=1 00:07:01.450 --rc geninfo_unexecuted_blocks=1 
00:07:01.450 00:07:01.450 ' 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.450 --rc genhtml_branch_coverage=1 00:07:01.450 --rc genhtml_function_coverage=1 00:07:01.450 --rc genhtml_legend=1 00:07:01.450 --rc geninfo_all_blocks=1 00:07:01.450 --rc geninfo_unexecuted_blocks=1 00:07:01.450 00:07:01.450 ' 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.450 --rc genhtml_branch_coverage=1 00:07:01.450 --rc genhtml_function_coverage=1 00:07:01.450 --rc genhtml_legend=1 00:07:01.450 --rc geninfo_all_blocks=1 00:07:01.450 --rc geninfo_unexecuted_blocks=1 00:07:01.450 00:07:01.450 ' 00:07:01.450 15:43:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.450 15:43:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3147023 00:07:01.450 15:43:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3147023 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3147023 ']' 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.450 15:43:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.450 15:43:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.709 [2024-11-17 15:43:47.028869] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:01.709 [2024-11-17 15:43:47.028960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3147023 ] 00:07:01.709 [2024-11-17 15:43:47.159303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.969 [2024-11-17 15:43:47.253939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.536 15:43:47 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.536 15:43:47 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:02.536 15:43:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:02.795 { 00:07:02.795 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:07:02.795 "fields": { 00:07:02.795 "major": 25, 00:07:02.795 "minor": 1, 00:07:02.795 "patch": 0, 00:07:02.795 "suffix": "-pre", 00:07:02.795 "commit": "83e8405e4" 00:07:02.795 } 00:07:02.795 } 00:07:02.795 15:43:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:02.795 15:43:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:02.795 15:43:48 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:02.795 15:43:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:02.795 15:43:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:02.795 15:43:48 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.795 15:43:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:02.795 15:43:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.795 15:43:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:02.795 15:43:48 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.795 15:43:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:02.795 15:43:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:02.795 15:43:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.796 15:43:48 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:02.796 15:43:48 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.796 15:43:48 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:02.796 15:43:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.796 15:43:48 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:02.796 15:43:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.796 15:43:48 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:02.796 15:43:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.796 15:43:48 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:02.796 15:43:48 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:02.796 15:43:48 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:03.055 request: 00:07:03.055 { 00:07:03.055 "method": "env_dpdk_get_mem_stats", 00:07:03.055 "req_id": 1 00:07:03.055 } 00:07:03.055 Got JSON-RPC error response 00:07:03.055 response: 00:07:03.055 { 00:07:03.055 "code": -32601, 00:07:03.055 "message": "Method not found" 00:07:03.055 } 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.055 15:43:48 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3147023 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3147023 ']' 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3147023 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3147023 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3147023' 00:07:03.055 killing process with pid 3147023 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@973 -- # kill 3147023 00:07:03.055 15:43:48 app_cmdline -- common/autotest_common.sh@978 -- # wait 3147023 00:07:05.663 00:07:05.663 real 0m3.925s 00:07:05.663 user 0m4.069s 00:07:05.663 sys 0m0.671s 00:07:05.663 15:43:50 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.663 15:43:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.663 ************************************ 00:07:05.663 END TEST app_cmdline 00:07:05.663 ************************************ 00:07:05.663 15:43:50 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:05.663 15:43:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.663 15:43:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.663 15:43:50 -- common/autotest_common.sh@10 -- # set +x 00:07:05.663 ************************************ 00:07:05.663 START TEST version 00:07:05.663 ************************************ 00:07:05.663 15:43:50 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:05.663 * Looking for test storage... 
00:07:05.663 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:05.663 15:43:50 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.663 15:43:50 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.663 15:43:50 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.663 15:43:50 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.663 15:43:50 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.663 15:43:50 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.663 15:43:50 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.663 15:43:50 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.663 15:43:50 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.663 15:43:50 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.663 15:43:50 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.663 15:43:50 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.663 15:43:50 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.663 15:43:50 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.663 15:43:50 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.664 15:43:50 version -- scripts/common.sh@344 -- # case "$op" in 00:07:05.664 15:43:50 version -- scripts/common.sh@345 -- # : 1 00:07:05.664 15:43:50 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.664 15:43:50 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.664 15:43:50 version -- scripts/common.sh@365 -- # decimal 1 00:07:05.664 15:43:50 version -- scripts/common.sh@353 -- # local d=1 00:07:05.664 15:43:50 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.664 15:43:50 version -- scripts/common.sh@355 -- # echo 1 00:07:05.664 15:43:50 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.664 15:43:50 version -- scripts/common.sh@366 -- # decimal 2 00:07:05.664 15:43:50 version -- scripts/common.sh@353 -- # local d=2 00:07:05.664 15:43:50 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.664 15:43:50 version -- scripts/common.sh@355 -- # echo 2 00:07:05.664 15:43:50 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.664 15:43:50 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.664 15:43:50 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.664 15:43:50 version -- scripts/common.sh@368 -- # return 0 00:07:05.664 15:43:50 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.664 15:43:50 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.664 --rc genhtml_branch_coverage=1 00:07:05.664 --rc genhtml_function_coverage=1 00:07:05.664 --rc genhtml_legend=1 00:07:05.664 --rc geninfo_all_blocks=1 00:07:05.664 --rc geninfo_unexecuted_blocks=1 00:07:05.664 00:07:05.664 ' 00:07:05.664 15:43:50 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.664 --rc genhtml_branch_coverage=1 00:07:05.664 --rc genhtml_function_coverage=1 00:07:05.664 --rc genhtml_legend=1 00:07:05.664 --rc geninfo_all_blocks=1 00:07:05.664 --rc geninfo_unexecuted_blocks=1 00:07:05.664 00:07:05.664 ' 00:07:05.664 15:43:50 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.664 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.664 --rc genhtml_branch_coverage=1 00:07:05.664 --rc genhtml_function_coverage=1 00:07:05.664 --rc genhtml_legend=1 00:07:05.664 --rc geninfo_all_blocks=1 00:07:05.664 --rc geninfo_unexecuted_blocks=1 00:07:05.664 00:07:05.664 ' 00:07:05.664 15:43:50 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.664 --rc genhtml_branch_coverage=1 00:07:05.664 --rc genhtml_function_coverage=1 00:07:05.664 --rc genhtml_legend=1 00:07:05.664 --rc geninfo_all_blocks=1 00:07:05.664 --rc geninfo_unexecuted_blocks=1 00:07:05.664 00:07:05.664 ' 00:07:05.664 15:43:50 version -- app/version.sh@17 -- # get_header_version major 00:07:05.664 15:43:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:05.664 15:43:50 version -- app/version.sh@14 -- # cut -f2 00:07:05.664 15:43:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.664 15:43:50 version -- app/version.sh@17 -- # major=25 00:07:05.664 15:43:50 version -- app/version.sh@18 -- # get_header_version minor 00:07:05.664 15:43:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:05.664 15:43:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.664 15:43:50 version -- app/version.sh@14 -- # cut -f2 00:07:05.664 15:43:50 version -- app/version.sh@18 -- # minor=1 00:07:05.664 15:43:50 version -- app/version.sh@19 -- # get_header_version patch 00:07:05.664 15:43:50 version -- app/version.sh@14 -- # cut -f2 00:07:05.664 15:43:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:05.664 15:43:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.664 15:43:50 version -- app/version.sh@19 -- # patch=0 00:07:05.664 15:43:50 version -- app/version.sh@20 -- # get_header_version suffix 00:07:05.664 15:43:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:05.664 15:43:50 version -- app/version.sh@14 -- # cut -f2 00:07:05.664 15:43:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:05.664 15:43:50 version -- app/version.sh@20 -- # suffix=-pre 00:07:05.664 15:43:50 version -- app/version.sh@22 -- # version=25.1 00:07:05.664 15:43:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:05.664 15:43:50 version -- app/version.sh@28 -- # version=25.1rc0 00:07:05.664 15:43:50 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:05.664 15:43:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:05.664 15:43:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:05.664 15:43:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:05.664 00:07:05.664 real 0m0.252s 00:07:05.664 user 0m0.142s 00:07:05.664 sys 0m0.152s 00:07:05.664 15:43:51 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.664 15:43:51 version -- 
common/autotest_common.sh@10 -- # set +x 00:07:05.664 ************************************ 00:07:05.664 END TEST version 00:07:05.664 ************************************ 00:07:05.664 15:43:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:05.664 15:43:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:05.664 15:43:51 -- spdk/autotest.sh@194 -- # uname -s 00:07:05.664 15:43:51 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:05.664 15:43:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:05.664 15:43:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:05.664 15:43:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:05.664 15:43:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:05.664 15:43:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:05.664 15:43:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:05.664 15:43:51 -- common/autotest_common.sh@10 -- # set +x 00:07:05.664 15:43:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:05.664 15:43:51 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:05.664 15:43:51 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:05.664 15:43:51 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:05.664 15:43:51 -- spdk/autotest.sh@280 -- # '[' rdma = rdma ']' 00:07:05.664 15:43:51 -- spdk/autotest.sh@281 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:05.664 15:43:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.664 15:43:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.664 15:43:51 -- common/autotest_common.sh@10 -- # set +x 00:07:05.664 ************************************ 00:07:05.664 START TEST nvmf_rdma 00:07:05.664 ************************************ 00:07:05.664 15:43:51 nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:05.664 * Looking for test storage... 00:07:05.664 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:05.664 15:43:51 nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.664 15:43:51 nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.664 15:43:51 nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:05.924 15:43:51 nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.924 15:43:51 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:07:05.924 15:43:51 nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.924 15:43:51 nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:05.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.924 --rc genhtml_branch_coverage=1 00:07:05.924 --rc genhtml_function_coverage=1 00:07:05.924 --rc genhtml_legend=1 00:07:05.924 --rc geninfo_all_blocks=1 00:07:05.924 --rc geninfo_unexecuted_blocks=1 00:07:05.924 00:07:05.925 ' 00:07:05.925 15:43:51 nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:05.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.925 --rc genhtml_branch_coverage=1 00:07:05.925 --rc genhtml_function_coverage=1 00:07:05.925 --rc genhtml_legend=1 00:07:05.925 --rc geninfo_all_blocks=1 00:07:05.925 --rc geninfo_unexecuted_blocks=1 00:07:05.925 00:07:05.925 ' 00:07:05.925 15:43:51 nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:05.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.925 --rc genhtml_branch_coverage=1 00:07:05.925 --rc genhtml_function_coverage=1 00:07:05.925 --rc genhtml_legend=1 00:07:05.925 --rc geninfo_all_blocks=1 00:07:05.925 --rc geninfo_unexecuted_blocks=1 00:07:05.925 00:07:05.925 ' 00:07:05.925 15:43:51 nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:05.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.925 --rc genhtml_branch_coverage=1 00:07:05.925 --rc genhtml_function_coverage=1 00:07:05.925 --rc genhtml_legend=1 00:07:05.925 --rc geninfo_all_blocks=1 00:07:05.925 --rc geninfo_unexecuted_blocks=1 00:07:05.925 00:07:05.925 ' 00:07:05.925 15:43:51 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:05.925 15:43:51 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:05.925 15:43:51 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:05.925 15:43:51 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.925 15:43:51 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.925 15:43:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:05.925 ************************************ 00:07:05.925 START TEST nvmf_target_core 00:07:05.925 ************************************ 00:07:05.925 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:05.925 * Looking for test storage... 00:07:05.925 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:05.925 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:05.925 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:05.925 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.184 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.184 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.184 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.184 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.184 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.184 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.185 --rc genhtml_branch_coverage=1 00:07:06.185 --rc genhtml_function_coverage=1 00:07:06.185 --rc genhtml_legend=1 00:07:06.185 --rc geninfo_all_blocks=1 00:07:06.185 --rc geninfo_unexecuted_blocks=1 00:07:06.185 00:07:06.185 ' 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.185 --rc genhtml_branch_coverage=1 00:07:06.185 --rc genhtml_function_coverage=1 00:07:06.185 --rc genhtml_legend=1 00:07:06.185 --rc geninfo_all_blocks=1 00:07:06.185 --rc geninfo_unexecuted_blocks=1 00:07:06.185 00:07:06.185 ' 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.185 --rc genhtml_branch_coverage=1 00:07:06.185 --rc genhtml_function_coverage=1 00:07:06.185 --rc genhtml_legend=1 00:07:06.185 --rc geninfo_all_blocks=1 00:07:06.185 --rc geninfo_unexecuted_blocks=1 00:07:06.185 00:07:06.185 ' 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.185 --rc genhtml_branch_coverage=1 00:07:06.185 --rc genhtml_function_coverage=1 00:07:06.185 --rc genhtml_legend=1 00:07:06.185 --rc geninfo_all_blocks=1 00:07:06.185 --rc geninfo_unexecuted_blocks=1 00:07:06.185 00:07:06.185 ' 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.185 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:06.185 
************************************ 00:07:06.185 START TEST nvmf_abort 00:07:06.185 ************************************ 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:06.185 * Looking for test storage... 00:07:06.185 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.185 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.446 --rc genhtml_branch_coverage=1 00:07:06.446 --rc genhtml_function_coverage=1 00:07:06.446 --rc genhtml_legend=1 00:07:06.446 --rc geninfo_all_blocks=1 00:07:06.446 --rc geninfo_unexecuted_blocks=1 00:07:06.446 00:07:06.446 ' 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.446 --rc genhtml_branch_coverage=1 00:07:06.446 --rc genhtml_function_coverage=1 00:07:06.446 --rc genhtml_legend=1 00:07:06.446 --rc geninfo_all_blocks=1 00:07:06.446 --rc geninfo_unexecuted_blocks=1 00:07:06.446 00:07:06.446 ' 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.446 --rc genhtml_branch_coverage=1 00:07:06.446 --rc genhtml_function_coverage=1 00:07:06.446 --rc genhtml_legend=1 00:07:06.446 --rc geninfo_all_blocks=1 00:07:06.446 --rc geninfo_unexecuted_blocks=1 00:07:06.446 00:07:06.446 ' 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.446 --rc genhtml_branch_coverage=1 00:07:06.446 --rc genhtml_function_coverage=1 00:07:06.446 --rc genhtml_legend=1 00:07:06.446 --rc geninfo_all_blocks=1 00:07:06.446 --rc geninfo_unexecuted_blocks=1 00:07:06.446 00:07:06.446 ' 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:06.446 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.447 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:06.447 15:43:51 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:13.023 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:13.023 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:13.024 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:13.024 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:13.024 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # rdma_device_init 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:13.024 6: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:07:13.024 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:13.024 altname enp217s0f0np0 00:07:13.024 altname ens818f0np0 00:07:13.024 inet 192.168.100.8/24 scope global mlx_0_0 00:07:13.024 valid_lft forever preferred_lft forever 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:13.024 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:13.024 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:13.024 altname enp217s0f1np1 00:07:13.024 altname ens818f1np1 00:07:13.024 inet 192.168.100.9/24 scope global mlx_0_1 00:07:13.024 valid_lft forever preferred_lft forever 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:13.024 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:13.284 15:43:58 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:13.284 192.168.100.9' 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:13.284 192.168.100.9' 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # head -n 1 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:13.284 192.168.100.9' 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # tail -n +2 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # head -n 1 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:13.284 15:43:58 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3151246 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3151246 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3151246 ']' 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.284 15:43:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.284 [2024-11-17 15:43:58.751826] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:13.284 [2024-11-17 15:43:58.751938] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.544 [2024-11-17 15:43:58.889818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.544 [2024-11-17 15:43:58.992598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:13.544 [2024-11-17 15:43:58.992645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:13.544 [2024-11-17 15:43:58.992658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:13.544 [2024-11-17 15:43:58.992670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:13.544 [2024-11-17 15:43:58.992679] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
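The nvmftestinit block above derives the test addresses (NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP) by parsing ip -o -4 addr show for each mlx_0_* netdev. As a minimal standalone sketch of that lookup step, assuming the mlx_0_0 / mlx_0_1 aliases set up by the harness are present, the traced pipeline amounts to:

#!/usr/bin/env bash
# Sketch of the get_ip_address helper traced in nvmf/common.sh above:
# print the first IPv4 address configured on a given netdev.
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per address; field 4 is "addr/prefixlen".
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig
get_ip_address mlx_0_1   # prints 192.168.100.9 on this rig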
00:07:13.544 [2024-11-17 15:43:58.994901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.544 [2024-11-17 15:43:58.994964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.544 [2024-11-17 15:43:58.994972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:14.113 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.113 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:14.113 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:14.113 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.113 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.113 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.113 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:07:14.113 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.113 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.113 [2024-11-17 15:43:59.622737] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f1156531940) succeed. 00:07:14.113 [2024-11-17 15:43:59.640475] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f1155bbd940) succeed. 00:07:14.372 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.372 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:14.372 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.372 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 Malloc0 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 Delay0 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 [2024-11-17 15:43:59.971095] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.631 15:43:59 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:14.631 [2024-11-17 15:44:00.122847] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:17.177 Initializing NVMe Controllers 00:07:17.177 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:17.177 controller IO queue size 128 less than required 00:07:17.177 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:17.177 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:17.177 Initialization complete. Launching workers. 
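Pulled together from the xtrace lines above, this is the target-side sequence abort.sh drives before the counters reported next (rpc_cmd is the RPC wrapper the trace itself uses; every flag is as traced, nothing added):
rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
# initiator side: the abort example run for 1 second at queue depth 128 against the new subsystem
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128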
00:07:17.177 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37985 00:07:17.177 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38046, failed to submit 62 00:07:17.177 success 37988, unsuccessful 58, failed 0 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:17.177 rmmod nvme_rdma 00:07:17.177 rmmod nvme_fabrics 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3151246 ']' 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3151246 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3151246 ']' 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3151246 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3151246 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:17.177 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3151246' 00:07:17.177 killing process with pid 3151246 00:07:17.178 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3151246 00:07:17.178 15:44:02 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3151246 00:07:18.558 15:44:04 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:18.558 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:07:18.558 00:07:18.558 real 0m12.514s 00:07:18.558 user 0m18.521s 00:07:18.558 sys 0m5.989s 00:07:18.558 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.558 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:18.558 ************************************ 00:07:18.558 END TEST nvmf_abort 00:07:18.558 ************************************ 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:18.818 ************************************ 00:07:18.818 START TEST nvmf_ns_hotplug_stress 00:07:18.818 ************************************ 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:18.818 * Looking for test storage... 00:07:18.818 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.818 --rc genhtml_branch_coverage=1 00:07:18.818 --rc genhtml_function_coverage=1 00:07:18.818 --rc genhtml_legend=1 00:07:18.818 --rc geninfo_all_blocks=1 00:07:18.818 --rc geninfo_unexecuted_blocks=1 00:07:18.818 00:07:18.818 ' 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.818 --rc genhtml_branch_coverage=1 00:07:18.818 --rc genhtml_function_coverage=1 00:07:18.818 --rc genhtml_legend=1 00:07:18.818 --rc geninfo_all_blocks=1 00:07:18.818 --rc geninfo_unexecuted_blocks=1 00:07:18.818 00:07:18.818 ' 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.818 --rc genhtml_branch_coverage=1 00:07:18.818 --rc genhtml_function_coverage=1 00:07:18.818 --rc genhtml_legend=1 00:07:18.818 --rc geninfo_all_blocks=1 00:07:18.818 --rc geninfo_unexecuted_blocks=1 00:07:18.818 00:07:18.818 ' 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:18.818 --rc genhtml_branch_coverage=1 00:07:18.818 --rc genhtml_function_coverage=1 00:07:18.818 --rc genhtml_legend=1 00:07:18.818 --rc geninfo_all_blocks=1 00:07:18.818 --rc geninfo_unexecuted_blocks=1 00:07:18.818 00:07:18.818 ' 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.818 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.079 15:44:04 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:19.079 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:19.079 15:44:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:07:25.653 15:44:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.653 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:07:25.654 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:25.654 15:44:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:07:25.654 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:07:25.654 Found net devices under 0000:d9:00.0: mlx_0_0 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
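The PCI-address-to-netdev mapping traced above is a plain sysfs glob; for the first port this job found, a hand-run equivalent would be:
ls /sys/bus/pci/devices/0000:d9:00.0/net/    # prints mlx_0_0 on this node, per the trace above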
00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:07:25.654 Found net devices under 0000:d9:00.1: mlx_0_1 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:25.654 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:25.915 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:25.915 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:07:25.915 altname enp217s0f0np0 00:07:25.915 altname ens818f0np0 00:07:25.915 inet 192.168.100.8/24 scope global mlx_0_0 00:07:25.915 valid_lft forever preferred_lft forever 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:25.915 15:44:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:25.915 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:25.915 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:07:25.915 altname enp217s0f1np1 00:07:25.915 altname ens818f1np1 00:07:25.915 inet 192.168.100.9/24 scope global mlx_0_1 00:07:25.915 valid_lft forever preferred_lft forever 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:07:25.915 192.168.100.9' 00:07:25.915 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:07:25.916 192.168.100.9' 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # head -n 1 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:07:25.916 192.168.100.9' 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # head -n 1 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # tail -n +2 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3155684 00:07:25.916 15:44:11 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3155684 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3155684 ']' 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.916 15:44:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:26.176 [2024-11-17 15:44:11.507367] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:26.176 [2024-11-17 15:44:11.507468] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.176 [2024-11-17 15:44:11.641622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.436 [2024-11-17 15:44:11.742822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.436 [2024-11-17 15:44:11.742868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.436 [2024-11-17 15:44:11.742881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.436 [2024-11-17 15:44:11.742893] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.436 [2024-11-17 15:44:11.742902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
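Before this second target start, the listen addresses were harvested from the mlx interfaces enumerated above; the traced pipeline reduces to these two one-liners (variable names as used by the harness):
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.8 -> NVMF_FIRST_TARGET_IP
ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1    # 192.168.100.9 -> NVMF_SECOND_TARGET_IP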
00:07:26.436 [2024-11-17 15:44:11.745247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.436 [2024-11-17 15:44:11.745311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.436 [2024-11-17 15:44:11.745318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.005 15:44:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.005 15:44:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:07:27.005 15:44:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:27.005 15:44:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:27.005 15:44:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:27.005 15:44:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.005 15:44:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:27.005 15:44:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:27.263 [2024-11-17 15:44:12.559535] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f637f3bd940) succeed. 00:07:27.264 [2024-11-17 15:44:12.568836] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f637f379940) succeed. 00:07:27.264 15:44:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:27.523 15:44:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:27.782 [2024-11-17 15:44:13.166730] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:27.782 15:44:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:28.041 15:44:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:28.041 Malloc0 00:07:28.300 15:44:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:28.300 Delay0 00:07:28.300 15:44:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.560 15:44:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:28.819 NULL1 00:07:28.819 15:44:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:29.078 15:44:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3156247 00:07:29.078 15:44:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:29.079 15:44:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:29.079 15:44:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.079 15:44:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.338 15:44:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:29.338 15:44:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:29.597 true 00:07:29.597 15:44:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:29.597 15:44:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.856 15:44:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.856 15:44:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:29.856 15:44:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:30.115 true 00:07:30.115 15:44:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:30.115 15:44:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.374 15:44:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.633 15:44:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:30.633 15:44:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:30.633 true 00:07:30.891 
15:44:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:30.891 15:44:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.891 15:44:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.150 15:44:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:31.150 15:44:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:31.410 true 00:07:31.410 15:44:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:31.410 15:44:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.669 15:44:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.669 15:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:31.669 15:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:31.928 true 00:07:31.928 15:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:31.929 15:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.188 15:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.447 15:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:32.447 15:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:32.447 true 00:07:32.447 15:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:32.447 15:44:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.706 15:44:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.965 15:44:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:32.965 15:44:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:33.224 true 00:07:33.224 15:44:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:33.224 15:44:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.224 15:44:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.483 15:44:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:33.483 15:44:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:33.742 true 00:07:33.742 15:44:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:33.742 15:44:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.001 15:44:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.001 15:44:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:34.001 15:44:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:34.259 true 00:07:34.259 15:44:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:34.260 15:44:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.518 15:44:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.778 15:44:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:34.778 15:44:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:34.778 true 00:07:35.037 15:44:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:35.037 15:44:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.037 15:44:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.296 15:44:20 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:35.296 15:44:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:35.555 true 00:07:35.555 15:44:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:35.555 15:44:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.814 15:44:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.814 15:44:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:35.814 15:44:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:36.073 true 00:07:36.073 15:44:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:36.073 15:44:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.332 15:44:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.591 15:44:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:36.591 15:44:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:36.591 true 00:07:36.591 15:44:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:36.591 15:44:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.851 15:44:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.114 15:44:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:37.114 15:44:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:37.114 true 00:07:37.504 15:44:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:37.504 15:44:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:37.504 15:44:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.762 15:44:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:37.762 15:44:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:37.762 true 00:07:37.762 15:44:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:37.762 15:44:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.021 15:44:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.280 15:44:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:38.280 15:44:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:38.540 true 00:07:38.540 15:44:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:38.540 15:44:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.540 15:44:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.800 15:44:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:38.800 15:44:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:39.059 true 00:07:39.059 15:44:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:39.059 15:44:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.319 15:44:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.319 15:44:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:39.319 15:44:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:39.578 true 00:07:39.578 15:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:39.578 15:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.838 15:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.098 15:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:40.098 15:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:40.098 true 00:07:40.098 15:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:40.098 15:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.357 15:44:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.616 15:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:40.616 15:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:40.876 true 00:07:40.876 15:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:40.876 15:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.135 15:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.135 15:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:41.135 15:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:41.394 true 00:07:41.395 15:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:41.395 15:44:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.654 15:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.913 15:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:41.913 15:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:41.913 true 00:07:41.913 15:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:41.913 15:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.173 15:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.432 15:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:42.432 15:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:42.432 true 00:07:42.692 15:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:42.692 15:44:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.692 15:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.952 15:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:42.952 15:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:43.211 true 00:07:43.211 15:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:43.211 15:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.471 15:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.471 15:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:43.471 15:44:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:43.734 true 00:07:43.734 15:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:43.734 15:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.994 15:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.254 15:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:44.254 15:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:44.254 true 00:07:44.254 15:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:44.254 15:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.513 15:44:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.772 15:44:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:44.772 15:44:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:45.032 true 00:07:45.032 15:44:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:45.032 15:44:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.032 15:44:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.291 15:44:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:45.291 15:44:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:45.550 true 00:07:45.550 15:44:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:45.550 15:44:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.809 15:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.809 15:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:45.809 15:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:46.068 true 00:07:46.068 15:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:46.068 15:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.327 15:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.586 15:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:46.586 15:44:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:46.586 true 00:07:46.586 15:44:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:46.586 15:44:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.845 15:44:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.105 15:44:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:47.105 15:44:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:47.364 true 00:07:47.364 15:44:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:47.364 15:44:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.624 15:44:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.624 15:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:47.624 15:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:47.883 true 00:07:47.883 15:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:47.883 15:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.143 15:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.143 15:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:48.143 15:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:48.402 true 00:07:48.402 15:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:48.402 15:44:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.662 15:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.921 15:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:48.921 15:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:49.180 true 00:07:49.180 15:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:49.180 15:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.180 15:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.440 15:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:49.440 15:44:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:49.699 true 00:07:49.699 15:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:49.699 15:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.958 15:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.958 15:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:49.958 15:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:50.218 true 00:07:50.218 15:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:50.218 15:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.477 15:44:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.737 15:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:50.737 15:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:50.737 true 00:07:50.737 15:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:50.737 15:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.997 15:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.257 15:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:51.257 15:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:51.566 true 00:07:51.566 15:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:51.566 15:44:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.566 15:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.826 15:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:51.826 15:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:52.086 true 00:07:52.086 15:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:52.086 15:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.346 15:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.346 15:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:52.346 15:44:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:52.605 true 00:07:52.605 15:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:52.605 15:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.863 15:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.122 15:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:53.122 15:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:53.122 true 00:07:53.122 15:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:53.122 15:44:38 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.381 15:44:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.640 15:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:53.640 15:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:53.898 true 00:07:53.898 15:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:53.898 15:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.898 15:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.157 15:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:54.157 15:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:54.416 true 00:07:54.416 15:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:54.416 15:44:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.676 15:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.935 15:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:54.935 15:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:54.935 true 00:07:54.935 15:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:54.935 15:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.194 15:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.453 15:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:55.453 15:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:55.453 true 
00:07:55.453 15:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:55.453 15:44:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.712 15:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.971 15:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:55.971 15:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:56.230 true 00:07:56.230 15:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:56.230 15:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.489 15:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.489 15:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:56.489 15:44:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:56.748 true 00:07:56.748 15:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:56.748 15:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.007 15:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.266 15:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:57.266 15:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:57.266 true 00:07:57.266 15:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:57.266 15:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.525 15:44:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.784 15:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:57.784 15:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:58.044 true 00:07:58.044 15:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:58.044 15:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.044 15:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.303 15:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:58.303 15:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:58.562 true 00:07:58.562 15:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:58.562 15:44:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.821 15:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.821 15:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:58.821 15:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:59.079 true 00:07:59.079 15:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:59.079 15:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.338 15:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.597 15:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:59.597 15:44:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:59.597 true 00:07:59.597 15:44:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:07:59.597 15:44:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.856 15:44:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.118 15:44:45 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:08:00.118 15:44:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:08:00.415 Initializing NVMe Controllers
00:08:00.415 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:08:00.415 Controller IO queue size 128, less than required.
00:08:00.415 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:00.415 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:00.415 Initialization complete. Launching workers.
00:08:00.415 ========================================================
00:08:00.415 Latency(us)
00:08:00.415 Device Information : IOPS MiB/s Average min max
00:08:00.415 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35948.80 17.55 3560.42 1660.46 4796.56
00:08:00.415 ========================================================
00:08:00.415 Total : 35948.80 17.55 3560.42 1660.46 4796.56
00:08:00.415
00:08:00.415 true 00:08:00.415 15:44:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:08:00.415 15:44:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.415 15:44:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.701 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:08:00.701 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:08:00.960 true 00:08:00.960 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3156247 00:08:00.960 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3156247) - No such process 00:08:00.960 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3156247 00:08:00.960 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.219 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:01.219 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:01.219 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:01.219 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:01.219 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.220 15:44:46 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:01.479 null0 00:08:01.479 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.479 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.479 15:44:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:01.738 null1 00:08:01.738 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.738 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.738 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:01.738 null2 00:08:01.998 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.998 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.998 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:01.998 null3 00:08:01.998 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:01.998 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:01.998 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:02.257 null4 00:08:02.257 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.257 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.257 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:02.516 null5 00:08:02.516 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.516 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.516 15:44:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:02.516 null6 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:02.776 null7 00:08:02.776 15:44:48 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3162271 3162272 3162274 3162276 3162278 3162280 3162282 3162283 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:02.776 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.036 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.036 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.036 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.036 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.036 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.036 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.036 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.036 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.295 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.295 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.295 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.295 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.295 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.296 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.555 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.555 15:44:48 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.556 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.556 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.556 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.556 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.556 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.556 15:44:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:03.821 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.079 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.338 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.338 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.338 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.338 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.338 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.338 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.338 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.338 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.598 15:44:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.598 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:04.598 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.858 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.117 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.117 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.117 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.117 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.117 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.118 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.118 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.118 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.376 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.377 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.377 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.377 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.377 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.377 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.377 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.635 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.635 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.635 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.635 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.635 15:44:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.635 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.635 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.635 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.635 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.635 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.635 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.635 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.635 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.635 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.895 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.155 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.156 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.156 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.156 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.156 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.156 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.156 15:44:51 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.156 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.416 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.675 15:44:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.675 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.934 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.193 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.193 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.193 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:07.193 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:07.193 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:07.193 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:07.193 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:07.193 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:07.193 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:07.193 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:07.194 rmmod nvme_rdma 00:08:07.194 rmmod nvme_fabrics 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3155684 ']' 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3155684 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3155684 ']' 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3155684 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3155684 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3155684' 00:08:07.194 killing process with pid 3155684 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3155684 00:08:07.194 15:44:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3155684 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:09.100 00:08:09.100 real 0m50.009s 00:08:09.100 user 3m34.072s 00:08:09.100 sys 0m16.855s 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:09.100 ************************************ 00:08:09.100 END TEST nvmf_ns_hotplug_stress 00:08:09.100 ************************************ 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.100 ************************************ 00:08:09.100 START TEST nvmf_delete_subsystem 00:08:09.100 ************************************ 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:09.100 * Looking for test storage... 00:08:09.100 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.100 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:09.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.101 --rc genhtml_branch_coverage=1 00:08:09.101 --rc genhtml_function_coverage=1 00:08:09.101 --rc genhtml_legend=1 00:08:09.101 --rc geninfo_all_blocks=1 00:08:09.101 --rc geninfo_unexecuted_blocks=1 00:08:09.101 00:08:09.101 ' 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:09.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.101 --rc genhtml_branch_coverage=1 00:08:09.101 --rc genhtml_function_coverage=1 00:08:09.101 --rc genhtml_legend=1 00:08:09.101 --rc geninfo_all_blocks=1 00:08:09.101 --rc geninfo_unexecuted_blocks=1 00:08:09.101 00:08:09.101 ' 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:09.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.101 --rc genhtml_branch_coverage=1 00:08:09.101 --rc genhtml_function_coverage=1 00:08:09.101 --rc genhtml_legend=1 00:08:09.101 --rc geninfo_all_blocks=1 00:08:09.101 --rc geninfo_unexecuted_blocks=1 00:08:09.101 00:08:09.101 ' 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:09.101 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.101 --rc genhtml_branch_coverage=1 00:08:09.101 --rc genhtml_function_coverage=1 00:08:09.101 --rc genhtml_legend=1 00:08:09.101 --rc geninfo_all_blocks=1 00:08:09.101 --rc geninfo_unexecuted_blocks=1 00:08:09.101 00:08:09.101 ' 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.101 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.101 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.102 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.102 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:09.102 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:09.102 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:09.102 15:44:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:15.669 15:45:00 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:15.669 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:15.669 
15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:15.669 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:15.669 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:15.669 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:15.669 15:45:00 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # rdma_device_init 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:15.669 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:15.669 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:15.669 altname enp217s0f0np0 00:08:15.669 altname ens818f0np0 00:08:15.669 inet 192.168.100.8/24 scope global mlx_0_0 00:08:15.669 valid_lft forever preferred_lft forever 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:15.669 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:15.669 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:15.669 altname enp217s0f1np1 00:08:15.669 
altname ens818f1np1 00:08:15.669 inet 192.168.100.9/24 scope global mlx_0_1 00:08:15.669 valid_lft forever preferred_lft forever 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:15.669 15:45:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:15.669 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:15.669 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:15.670 15:45:01 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:15.670 192.168.100.9' 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:15.670 192.168.100.9' 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # head -n 1 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:15.670 192.168.100.9' 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # tail -n +2 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # head -n 1 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3166925 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3166925 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:15.670 15:45:01 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3166925 ']' 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.670 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.670 [2024-11-17 15:45:01.185998] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:15.670 [2024-11-17 15:45:01.186108] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.929 [2024-11-17 15:45:01.321527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:15.929 [2024-11-17 15:45:01.423881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.929 [2024-11-17 15:45:01.423929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.929 [2024-11-17 15:45:01.423943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.929 [2024-11-17 15:45:01.423957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.929 [2024-11-17 15:45:01.423967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
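Before the target application start above, nvmftestinit derived the two RDMA target addresses (192.168.100.8 and 192.168.100.9) by taking the first IPv4 address on each mlx_0_* netdev. A minimal sketch of that lookup, assuming the same mlx_0_0/mlx_0_1 interface names seen in this run:

```bash
# Sketch of the address lookup performed by nvmftestinit above
# (interface names assumed from this run; adjust to the local RDMA NICs).
for nic in mlx_0_0 mlx_0_1; do
  # "192.168.100.8/24 ..." -> "192.168.100.8"
  ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1
done
```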
00:08:15.929 [2024-11-17 15:45:01.426057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.929 [2024-11-17 15:45:01.426063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.497 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.497 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:16.497 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.497 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.497 15:45:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.497 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.497 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:16.497 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.497 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.757 [2024-11-17 15:45:02.051070] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f8f78ba4940) succeed. 00:08:16.757 [2024-11-17 15:45:02.060196] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f8f78b60940) succeed. 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.757 [2024-11-17 15:45:02.221746] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.757 NULL1 00:08:16.757 15:45:02 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.757 Delay0 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3167093 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:16.757 15:45:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:17.016 [2024-11-17 15:45:02.383876] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
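Condensed, the test setup that just ran reduces to the RPC sequence below, after which spdk_nvme_perf drives random I/O at the new listener and the script deletes the subsystem while that load is still running. A sketch with the same arguments as in the log, assuming rpc.py and spdk_nvme_perf are on PATH rather than invoked by their full workspace paths:

```bash
# Sketch of the nvmf_delete_subsystem setup shown above (arguments taken from the log).
rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
rpc.py bdev_null_create NULL1 1000 512            # 1000 MiB null bdev with 512-byte blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# 70/30 random read/write load against the listener, queue depth 128, for 5 seconds.
spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
  -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

sleep 2                                           # the script waits before deleting
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
wait "$perf_pid" || true                          # perf may exit non-zero once its namespace is gone
```

The in-flight I/O against Delay0 then completes with errors, which is what the completion-error entries below show.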
00:08:18.922 15:45:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:18.922 15:45:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.922 15:45:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.301 NVMe io qpair process completion error 00:08:20.301 NVMe io qpair process completion error 00:08:20.301 NVMe io qpair process completion error 00:08:20.301 NVMe io qpair process completion error 00:08:20.301 NVMe io qpair process completion error 00:08:20.301 NVMe io qpair process completion error 00:08:20.301 15:45:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.301 15:45:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:20.301 15:45:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3167093 00:08:20.301 15:45:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:20.561 15:45:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:20.561 15:45:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3167093 00:08:20.561 15:45:05 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:21.130 Write completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Write completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Write completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Read completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Write completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Read completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Read completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Read completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Read completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Read completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Read completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Write completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Read completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Read completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Read completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Read completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.130 Read completed with error (sct=0, sc=8) 00:08:21.130 starting I/O failed: -6 00:08:21.131 Write completed with error (sct=0, sc=8) 00:08:21.131 starting I/O failed: -6 00:08:21.131 Write completed with error (sct=0, sc=8) 00:08:21.131 starting I/O failed: -6 00:08:21.131 Write completed with error (sct=0, sc=8) 00:08:21.131 starting I/O failed: -6 00:08:21.131 Write completed with error (sct=0, sc=8) 00:08:21.131 starting I/O failed: -6 00:08:21.131 Read completed with error (sct=0, sc=8) 00:08:21.131 starting I/O 
failed: -6 00:08:21.131 Write completed with error (sct=0, sc=8) 00:08:21.131 starting I/O failed: -6 00:08:21.131 Read completed with error (sct=0, sc=8) 00:08:21.131 starting I/O failed: -6
[... repeated 'Read/Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' entries from the two perf qpairs elided ...]
00:08:21.132 Read completed with error
(sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Write completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Write completed with error (sct=0, sc=8) 00:08:21.132 Read completed with error (sct=0, sc=8) 00:08:21.132 Initializing NVMe Controllers 00:08:21.132 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:21.132 Controller IO queue size 128, less than required. 00:08:21.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:21.132 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:21.132 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:21.132 Initialization complete. Launching workers. 00:08:21.132 ======================================================== 00:08:21.132 Latency(us) 00:08:21.132 Device Information : IOPS MiB/s Average min max 00:08:21.133 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.45 0.04 1594136.47 1000101.94 2976996.53 00:08:21.133 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.45 0.04 1596400.70 1000927.62 2979532.65 00:08:21.133 ======================================================== 00:08:21.133 Total : 160.91 0.08 1595268.59 1000101.94 2979532.65 00:08:21.133 00:08:21.133 15:45:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:21.133 [2024-11-17 15:45:06.497873] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:08:21.133 15:45:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3167093 00:08:21.133 15:45:06 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:21.133 [2024-11-17 15:45:06.523916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:08:21.133 [2024-11-17 15:45:06.523948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
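The aborted completions above (sct=0, sc=8 is the generic "command aborted due to SQ deletion" status) are the point of this test: the subsystem is deleted while spdk_nvme_perf still has I/O outstanding, so the initiator's queue pairs are torn down underneath it and the controller ends up in a failed state. A minimal sketch of that sequence against an already running nvmf_tgt, reusing the RPC names and arguments visible in this trace (a hedged illustration, not the delete_subsystem.sh script itself; it assumes scripts/rpc.py from the SPDK tree and an existing bdev named Delay0):

    # target side: create the subsystem, expose it over RDMA, attach the Delay0 bdev
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # start spdk_nvme_perf against the listener (as in the trace), then, while that
    # I/O is still in flight, delete the subsystem out from under the initiator:
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1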
00:08:21.133 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3167093 00:08:21.702 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3167093) - No such process 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3167093 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3167093 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3167093 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.702 [2024-11-17 15:45:07.022778] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3168377 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:21.702 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:21.702 [2024-11-17 15:45:07.159963] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:22.271 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.271 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:22.271 15:45:07 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:22.530 15:45:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:22.530 15:45:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:22.530 15:45:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.099 15:45:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:23.099 15:45:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:23.099 15:45:08 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:23.667 15:45:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:23.667 15:45:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:23.667 15:45:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.235 15:45:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.235 15:45:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:24.235 15:45:09 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:24.803 15:45:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:24.803 15:45:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:24.803 15:45:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.063 15:45:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.063 15:45:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:25.063 15:45:10 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:25.632 15:45:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:25.632 15:45:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:25.632 15:45:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.200 15:45:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.200 15:45:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:26.200 15:45:11 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:26.769 15:45:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:26.769 15:45:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:26.769 15:45:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.337 15:45:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.337 15:45:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:27.337 15:45:12 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:27.596 15:45:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:27.596 15:45:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:27.596 15:45:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:28.165 15:45:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:28.165 15:45:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:28.165 15:45:13 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:28.745 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:28.745 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:28.745 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.003 Initializing NVMe Controllers 00:08:29.003 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:29.003 Controller IO queue size 128, less than required. 00:08:29.003 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
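The "Controller IO queue size 128, less than required" advisory repeats what the first run showed: the -q 128 queue depth requested by spdk_nvme_perf does not fit entirely on a qpair whose IO queue size is 128, so some requests wait inside the NVMe driver instead of being submitted. If that matters for a given measurement, the invocation logged above can simply be rerun with a smaller queue depth; the sketch below only changes -q, every other flag is copied from the command in this trace:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 3 -q 32 -w randrw -M 70 -o 512 -P 4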
00:08:29.003 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:29.003 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:29.003 Initialization complete. Launching workers. 00:08:29.003 ======================================================== 00:08:29.003 Latency(us) 00:08:29.003 Device Information : IOPS MiB/s Average min max 00:08:29.003 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001582.55 1000068.90 1004577.19 00:08:29.003 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002921.59 1000081.29 1007356.74 00:08:29.003 ======================================================== 00:08:29.003 Total : 256.00 0.12 1002252.07 1000068.90 1007356.74 00:08:29.003 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3168377 00:08:29.320 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3168377) - No such process 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3168377 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:29.320 rmmod nvme_rdma 00:08:29.320 rmmod nvme_fabrics 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3166925 ']' 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3166925 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3166925 ']' 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3166925 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
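The alternating "kill -0 3168377" / "sleep 0.5" lines above are the test polling for the perf process to exit: kill -0 sends no signal, it only checks whether the PID still exists, and the delay counter bounds how long the script will wait (about 20 half-second iterations here). A condensed sketch of that pattern, paraphrased from the trace rather than quoted from delete_subsystem.sh:

    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # perf_pid holds the spdk_nvme_perf PID, as in the log
        (( delay++ > 20 )) && { echo "perf did not exit in time" >&2; exit 1; }
        sleep 0.5
    done
    wait "$perf_pid" 2>/dev/null                # reap it once kill -0 reports it gone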
00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3166925 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3166925' 00:08:29.320 killing process with pid 3166925 00:08:29.320 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3166925 00:08:29.321 15:45:14 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3166925 00:08:30.698 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:30.698 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:30.698 00:08:30.698 real 0m21.826s 00:08:30.698 user 0m52.216s 00:08:30.698 sys 0m6.416s 00:08:30.698 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.698 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:30.698 ************************************ 00:08:30.698 END TEST nvmf_delete_subsystem 00:08:30.698 ************************************ 00:08:30.698 15:45:16 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:30.698 15:45:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:30.698 15:45:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.698 15:45:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:30.698 ************************************ 00:08:30.698 START TEST nvmf_host_management 00:08:30.698 ************************************ 00:08:30.698 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:30.960 * Looking for test storage... 
00:08:30.960 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:30.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.960 --rc genhtml_branch_coverage=1 00:08:30.960 --rc genhtml_function_coverage=1 00:08:30.960 --rc genhtml_legend=1 00:08:30.960 --rc geninfo_all_blocks=1 00:08:30.960 --rc geninfo_unexecuted_blocks=1 00:08:30.960 00:08:30.960 ' 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:30.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.960 --rc genhtml_branch_coverage=1 00:08:30.960 --rc genhtml_function_coverage=1 00:08:30.960 --rc genhtml_legend=1 00:08:30.960 --rc geninfo_all_blocks=1 00:08:30.960 --rc geninfo_unexecuted_blocks=1 00:08:30.960 00:08:30.960 ' 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:30.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.960 --rc genhtml_branch_coverage=1 00:08:30.960 --rc genhtml_function_coverage=1 00:08:30.960 --rc genhtml_legend=1 00:08:30.960 --rc geninfo_all_blocks=1 00:08:30.960 --rc geninfo_unexecuted_blocks=1 00:08:30.960 00:08:30.960 ' 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:30.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.960 --rc genhtml_branch_coverage=1 00:08:30.960 --rc genhtml_function_coverage=1 00:08:30.960 --rc genhtml_legend=1 00:08:30.960 --rc geninfo_all_blocks=1 00:08:30.960 --rc geninfo_unexecuted_blocks=1 00:08:30.960 00:08:30.960 ' 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.960 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:30.961 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:30.961 15:45:16 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:08:37.533 15:45:22 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:37.533 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:37.533 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:37.533 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found 
net devices under 0000:d9:00.1: mlx_0_1' 00:08:37.533 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:37.533 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # rdma_device_init 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:37.534 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:37.534 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:37.534 altname enp217s0f0np0 00:08:37.534 altname ens818f0np0 00:08:37.534 inet 192.168.100.8/24 scope global mlx_0_0 00:08:37.534 valid_lft forever preferred_lft forever 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:37.534 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:37.534 link/ether ec:0d:9a:8b:2d:dd brd 
ff:ff:ff:ff:ff:ff 00:08:37.534 altname enp217s0f1np1 00:08:37.534 altname ens818f1np1 00:08:37.534 inet 192.168.100.9/24 scope global mlx_0_1 00:08:37.534 valid_lft forever preferred_lft forever 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:37.534 15:45:22 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:37.534 15:45:22 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:37.534 192.168.100.9' 00:08:37.534 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:37.534 192.168.100.9' 00:08:37.534 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # head -n 1 00:08:37.534 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:37.534 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:37.534 192.168.100.9' 00:08:37.534 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # tail -n +2 00:08:37.534 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # head -n 1 00:08:37.534 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:37.534 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:37.534 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:37.534 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:37.534 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:37.534 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3173350 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3173350 00:08:37.535 
15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3173350 ']' 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.535 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:37.795 [2024-11-17 15:45:23.145581] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:37.795 [2024-11-17 15:45:23.145701] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.795 [2024-11-17 15:45:23.279903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:38.054 [2024-11-17 15:45:23.384130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.054 [2024-11-17 15:45:23.384179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.054 [2024-11-17 15:45:23.384191] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.054 [2024-11-17 15:45:23.384204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.054 [2024-11-17 15:45:23.384213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
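nvmfappstart has just launched the target with the core mask and event flags recorded above, and waitforlisten now blocks until the RPC socket answers. A hedged sketch of that start-and-wait pattern (the polling body is an assumption; the harness may probe the socket differently):

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # Keep retrying until the target responds on its UNIX-domain RPC socket.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done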
00:08:38.054 [2024-11-17 15:45:23.386806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.054 [2024-11-17 15:45:23.386870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:38.054 [2024-11-17 15:45:23.386958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.054 [2024-11-17 15:45:23.386983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:38.623 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.623 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:38.624 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:38.624 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:38.624 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.624 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.624 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:38.624 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.624 15:45:23 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.624 [2024-11-17 15:45:24.022537] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f3f8653e940) succeed. 00:08:38.624 [2024-11-17 15:45:24.031918] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f3f863bd940) succeed. 
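With the reactors up, the RDMA transport is created through the rpc_cmd wrapper; issued standalone against the target's socket, the same call looks roughly like this (values copied from the trace, scripts/rpc.py path relative to the SPDK tree):

  # -t rdma with 1024 shared buffers and an 8 KiB IO unit, as traced above.
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport \
      -t rdma --num-shared-buffers 1024 -u 8192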
00:08:38.883 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.883 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:38.883 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.883 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.883 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:38.883 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:38.883 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:38.883 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.883 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.883 Malloc0 00:08:38.883 [2024-11-17 15:45:24.421396] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3173660 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3173660 /var/tmp/bdevperf.sock 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3173660 ']' 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:39.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
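The rpcs.txt batch replayed through rpc_cmd above is not echoed in the trace; only its effects (Malloc0 and the RDMA listener on 192.168.100.8:4420) are visible. A minimal equivalent sequence, with the bdev size and serial number as illustrative assumptions and the NQNs taken from the bdevperf config below, would be approximately:

  # Back a subsystem with a malloc bdev, admit the test host, expose it over RDMA.
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420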
00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.142 { 00:08:39.142 "params": { 00:08:39.142 "name": "Nvme$subsystem", 00:08:39.142 "trtype": "$TEST_TRANSPORT", 00:08:39.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.142 "adrfam": "ipv4", 00:08:39.142 "trsvcid": "$NVMF_PORT", 00:08:39.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.142 "hdgst": ${hdgst:-false}, 00:08:39.142 "ddgst": ${ddgst:-false} 00:08:39.142 }, 00:08:39.142 "method": "bdev_nvme_attach_controller" 00:08:39.142 } 00:08:39.142 EOF 00:08:39.142 )") 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:39.142 15:45:24 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.142 "params": { 00:08:39.142 "name": "Nvme0", 00:08:39.142 "trtype": "rdma", 00:08:39.142 "traddr": "192.168.100.8", 00:08:39.142 "adrfam": "ipv4", 00:08:39.142 "trsvcid": "4420", 00:08:39.142 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.142 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:39.142 "hdgst": false, 00:08:39.142 "ddgst": false 00:08:39.142 }, 00:08:39.142 "method": "bdev_nvme_attach_controller" 00:08:39.142 }' 00:08:39.142 [2024-11-17 15:45:24.558663] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:39.142 [2024-11-17 15:45:24.558769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3173660 ] 00:08:39.402 [2024-11-17 15:45:24.689288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.402 [2024-11-17 15:45:24.792083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.662 Running I/O for 10 seconds... 
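The /dev/fd/63 argument handed to bdevperf above is a process substitution over gen_nvmf_target_json; the printf trace shows the attach-controller parameters it emits. Reassembled, the invocation and the shape of the generated document are roughly as follows (the outer subsystems/bdev wrapper is not echoed here and is an assumption):

  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10
  # Per the trace, the only generated config entry is:
  #   { "method": "bdev_nvme_attach_controller",
  #     "params": { "name": "Nvme0", "trtype": "rdma",
  #                 "traddr": "192.168.100.8", "adrfam": "ipv4",
  #                 "trsvcid": "4420",
  #                 "subnqn": "nqn.2016-06.io.spdk:cnode0",
  #                 "hostnqn": "nqn.2016-06.io.spdk:host0",
  #                 "hdgst": false, "ddgst": false } }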
00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
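waitforio above gates the fault-injection step on real traffic: it queries bdevperf's iostat and only proceeds once Nvme0n1 has completed at least 100 reads (579 on this run). A condensed sketch of that loop, with the retry pacing as an assumption:

  for i in $(seq 10 -1 1); do
      ops=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
      # Same threshold as the '[' 579 -ge 100 ']' check in the trace.
      [ "$ops" -ge 100 ] && break
      sleep 1
  done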
00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.922 15:45:25 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:41.120 663.00 IOPS, 41.44 MiB/s [2024-11-17T14:45:26.664Z] [2024-11-17 15:45:26.446506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bdff00 len:0x10000 key:0x181700 00:08:41.120 [2024-11-17 15:45:26.446577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.120 [2024-11-17 15:45:26.446616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:90240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bcfe40 len:0x10000 key:0x181700 00:08:41.120 [2024-11-17 15:45:26.446630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.120 [2024-11-17 15:45:26.446646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bbfd80 len:0x10000 key:0x181700 00:08:41.120 [2024-11-17 15:45:26.446659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.120 [2024-11-17 15:45:26.446674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000bafcc0 len:0x10000 key:0x181700 00:08:41.120 [2024-11-17 15:45:26.446687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.120 [2024-11-17 15:45:26.446701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b9fc00 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.446713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.446727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b8fb40 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.446741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.446756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b7fa80 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.446769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 
15:45:26.446783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b6f9c0 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.446797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.446812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b5f900 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.446831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.446846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b4f840 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.446858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.446872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b3f780 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.446885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.446901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b2f6c0 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.446914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.446929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b1f600 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.446943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.446957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000b0f540 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.446970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.446983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aff480 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.446996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aef3c0 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.447023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 
nsid:1 lba:92160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000adf300 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.447048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000acf240 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.447074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000abf180 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.447100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:92544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000aaf0c0 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.447126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a9f000 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.447154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a8ef40 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.447179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201000a7ee80 len:0x10000 key:0x181700 00:08:41.121 [2024-11-17 15:45:26.447205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ae36000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b487000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b466000 len:0x10000 key:0x182900 
00:08:41.121 [2024-11-17 15:45:26.447281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bab7000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ba96000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0e7000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0c6000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c717000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c6f6000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c6d5000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c6b4000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c693000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c672000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c651000 len:0x10000 key:0x182900 00:08:41.121 [2024-11-17 15:45:26.447573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.121 [2024-11-17 15:45:26.447587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c630000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c60f000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca0e000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ed000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9cc000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c9ab000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c98a000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447771] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c969000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c948000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c927000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c906000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8e5000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8c4000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c8a3000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c882000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.447975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c861000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.447986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.448000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:88704 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20000c840000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.448012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.448026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c81f000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.448041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.448055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc1e000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.448067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.448090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:89088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbfd000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.448102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.448116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbdc000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.448127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.448141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:89344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbbb000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.448154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.448167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb9a000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.448179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.448193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:89600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb79000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.448205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.448218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb58000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.448230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.448244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb37000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.448255] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.448269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:89984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cb16000 len:0x10000 key:0x182900 00:08:41.122 [2024-11-17 15:45:26.448282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:41.122 [2024-11-17 15:45:26.451577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:41.122 task offset: 90112 on job bdev=Nvme0n1 fails 00:08:41.122 00:08:41.122 Latency(us) 00:08:41.122 [2024-11-17T14:45:26.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:41.122 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:41.122 Job: Nvme0n1 ended in about 1.27 seconds with error 00:08:41.122 Verification LBA range: start 0x0 length 0x400 00:08:41.122 Nvme0n1 : 1.27 523.48 32.72 50.53 0.00 110367.69 2031.62 1013343.85 00:08:41.122 [2024-11-17T14:45:26.666Z] =================================================================================================================== 00:08:41.122 [2024-11-17T14:45:26.666Z] Total : 523.48 32.72 50.53 0.00 110367.69 2031.62 1013343.85 00:08:41.122 15:45:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3173660 00:08:41.122 15:45:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:41.122 15:45:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:41.122 15:45:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:41.122 15:45:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:41.122 15:45:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:41.122 15:45:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:41.122 15:45:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:41.122 { 00:08:41.122 "params": { 00:08:41.122 "name": "Nvme$subsystem", 00:08:41.122 "trtype": "$TEST_TRANSPORT", 00:08:41.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.123 "adrfam": "ipv4", 00:08:41.123 "trsvcid": "$NVMF_PORT", 00:08:41.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.123 "hdgst": ${hdgst:-false}, 00:08:41.123 "ddgst": ${ddgst:-false} 00:08:41.123 }, 00:08:41.123 "method": "bdev_nvme_attach_controller" 00:08:41.123 } 00:08:41.123 EOF 00:08:41.123 )") 00:08:41.123 15:45:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:41.123 15:45:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:08:41.123 15:45:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:41.123 15:45:26 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.123 "params": { 00:08:41.123 "name": "Nvme0", 00:08:41.123 "trtype": "rdma", 00:08:41.123 "traddr": "192.168.100.8", 00:08:41.123 "adrfam": "ipv4", 00:08:41.123 "trsvcid": "4420", 00:08:41.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:41.123 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:41.123 "hdgst": false, 00:08:41.123 "ddgst": false 00:08:41.123 }, 00:08:41.123 "method": "bdev_nvme_attach_controller" 00:08:41.123 }' 00:08:41.123 [2024-11-17 15:45:26.549570] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:41.123 [2024-11-17 15:45:26.549682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3173944 ] 00:08:41.382 [2024-11-17 15:45:26.683622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.382 [2024-11-17 15:45:26.785307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.641 Running I/O for 1 seconds... 00:08:43.021 2733.00 IOPS, 170.81 MiB/s 00:08:43.021 Latency(us) 00:08:43.021 [2024-11-17T14:45:28.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.021 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:43.021 Verification LBA range: start 0x0 length 0x400 00:08:43.021 Nvme0n1 : 1.01 2766.05 172.88 0.00 0.00 22651.05 1153.43 38168.17 00:08:43.021 [2024-11-17T14:45:28.565Z] =================================================================================================================== 00:08:43.021 [2024-11-17T14:45:28.565Z] Total : 2766.05 172.88 0.00 0.00 22651.05 1153.43 38168.17 00:08:43.589 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 3173660 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:08:43.589 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:43.589 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:43.589 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:43.589 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:43.589 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:43.589 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:43.589 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 
00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:43.590 rmmod nvme_rdma 00:08:43.590 rmmod nvme_fabrics 00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3173350 ']' 00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3173350 00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3173350 ']' 00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3173350 00:08:43.590 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:43.849 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.849 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3173350 00:08:43.849 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:43.849 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:43.849 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3173350' 00:08:43.849 killing process with pid 3173350 00:08:43.849 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3173350 00:08:43.849 15:45:29 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3173350 00:08:45.758 [2024-11-17 15:45:30.909092] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:45.758 15:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:45.758 15:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:08:45.758 15:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:45.758 00:08:45.758 real 0m14.821s 00:08:45.758 user 0m34.792s 00:08:45.758 sys 0m6.618s 00:08:45.758 15:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.758 15:45:30 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.758 ************************************ 00:08:45.758 END TEST nvmf_host_management 00:08:45.758 ************************************ 00:08:45.758 15:45:31 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:45.758 15:45:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:45.758 15:45:31 nvmf_rdma.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.758 15:45:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.758 ************************************ 00:08:45.758 START TEST nvmf_lvol 00:08:45.758 ************************************ 00:08:45.758 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:45.758 * Looking for test storage... 00:08:45.758 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:45.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.759 --rc genhtml_branch_coverage=1 00:08:45.759 --rc genhtml_function_coverage=1 00:08:45.759 --rc genhtml_legend=1 00:08:45.759 --rc geninfo_all_blocks=1 00:08:45.759 --rc geninfo_unexecuted_blocks=1 00:08:45.759 00:08:45.759 ' 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:45.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.759 --rc genhtml_branch_coverage=1 00:08:45.759 --rc genhtml_function_coverage=1 00:08:45.759 --rc genhtml_legend=1 00:08:45.759 --rc geninfo_all_blocks=1 00:08:45.759 --rc geninfo_unexecuted_blocks=1 00:08:45.759 00:08:45.759 ' 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:45.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.759 --rc genhtml_branch_coverage=1 00:08:45.759 --rc genhtml_function_coverage=1 00:08:45.759 --rc genhtml_legend=1 00:08:45.759 --rc geninfo_all_blocks=1 00:08:45.759 --rc geninfo_unexecuted_blocks=1 00:08:45.759 00:08:45.759 ' 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:45.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.759 --rc genhtml_branch_coverage=1 00:08:45.759 --rc genhtml_function_coverage=1 00:08:45.759 --rc genhtml_legend=1 00:08:45.759 --rc geninfo_all_blocks=1 00:08:45.759 --rc geninfo_unexecuted_blocks=1 00:08:45.759 00:08:45.759 ' 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
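The cmp_versions walk traced above decides whether the installed lcov predates 2.x by splitting both version strings on '.', '-' and ':' and comparing the fields numerically. Stripped to its core, the check is roughly this (a condensed illustration, not the exact scripts/common.sh body):

  ver1=(1 15); ver2=(2 0)        # lcov 1.15 versus the "2" threshold
  result=equal
  for v in 0 1; do
      if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then result=older; break; fi
      if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then result=newer; break; fi
  done
  echo "$result"                 # -> older, so the 1.x-style LCOV_OPTS are exported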
== FreeBSD ]] 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.759 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:46.019 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:08:46.019 15:45:31 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.584 15:45:37 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:52.584 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:52.584 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:52.584 15:45:37 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.584 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:52.585 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:52.585 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # rdma_device_init 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
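The trace above is nvmf/common.sh enumerating the two Mellanox ports (vendor 0x15b3, device 0x1015), mapping each PCI function to its netdev through sysfs, and loading the kernel modules NVMe/RDMA needs before any IPs are assigned. A minimal stand-alone sketch of that discovery, assuming lspci is available; the loop shape and variable names are illustrative, not the exact common.sh code:

  # list Mellanox PCI functions and the net devices behind them
  for pci in $(lspci -Dd 15b3: | awk '{print $1}'); do
      for netdir in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdir" ] && echo "Found net device under $pci: $(basename "$netdir")"
      done
  done
  # RDMA/IB modules loaded by rdma_device_init, as seen in the trace
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done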
00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@530 -- # allocate_nic_ips 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:52.585 
15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:52.585 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:52.585 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:52.585 altname enp217s0f0np0 00:08:52.585 altname ens818f0np0 00:08:52.585 inet 192.168.100.8/24 scope global mlx_0_0 00:08:52.585 valid_lft forever preferred_lft forever 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:52.585 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:52.585 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:52.585 altname enp217s0f1np1 00:08:52.585 altname ens818f1np1 00:08:52.585 inet 192.168.100.9/24 scope global mlx_0_1 00:08:52.585 valid_lft forever preferred_lft forever 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@109 -- # continue 2 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:52.585 15:45:37 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:08:52.585 192.168.100.9' 00:08:52.585 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:08:52.585 192.168.100.9' 00:08:52.585 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # head -n 1 00:08:52.585 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:08:52.586 192.168.100.9' 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # tail -n +2 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # head -n 1 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:08:52.586 
15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3178095 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3178095 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3178095 ']' 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.586 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:52.845 [2024-11-17 15:45:38.147887] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:52.845 [2024-11-17 15:45:38.147997] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.845 [2024-11-17 15:45:38.280287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:52.845 [2024-11-17 15:45:38.376136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.845 [2024-11-17 15:45:38.376184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.845 [2024-11-17 15:45:38.376196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.845 [2024-11-17 15:45:38.376209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.845 [2024-11-17 15:45:38.376218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
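The addressing step above reads the IPv4 address off each RDMA netdev with ip/awk/cut, builds RDMA_IP_LIST, and takes the first and second entries as the target addresses before loading the host-side driver. A rough equivalent, assuming the interface names mlx_0_0/mlx_0_1 seen in this run (the ip_of helper is illustrative):

  ip_of() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }
  RDMA_IP_LIST=$(printf '%s\n%s' "$(ip_of mlx_0_0)" "$(ip_of mlx_0_1)")
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run
  modprobe nvme-rdma    # host transport module needed for 'nvme connect -t rdma'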
00:08:52.845 [2024-11-17 15:45:38.378411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.845 [2024-11-17 15:45:38.378478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.845 [2024-11-17 15:45:38.378484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.412 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.412 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:53.412 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.412 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.412 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:53.671 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.671 15:45:38 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:53.671 [2024-11-17 15:45:39.175984] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7fc49c376940) succeed. 00:08:53.671 [2024-11-17 15:45:39.185151] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7fc49c331940) succeed. 00:08:53.930 15:45:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.188 15:45:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:54.188 15:45:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.447 15:45:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:54.447 15:45:39 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:54.706 15:45:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:54.964 15:45:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=5c03fe07-2b61-437d-bd11-ae991a93b6a7 00:08:54.964 15:45:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5c03fe07-2b61-437d-bd11-ae991a93b6a7 lvol 20 00:08:54.964 15:45:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d0a5dc7b-108a-4194-a2cc-fe833cf96134 00:08:54.964 15:45:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:55.223 15:45:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d0a5dc7b-108a-4194-a2cc-fe833cf96134 00:08:55.482 15:45:40 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:55.740 [2024-11-17 15:45:41.073734] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:55.740 15:45:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:55.998 15:45:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3178745 00:08:55.998 15:45:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:55.998 15:45:41 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:56.936 15:45:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d0a5dc7b-108a-4194-a2cc-fe833cf96134 MY_SNAPSHOT 00:08:57.196 15:45:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=47ad4804-24d1-4574-9398-bce720ff3252 00:08:57.196 15:45:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d0a5dc7b-108a-4194-a2cc-fe833cf96134 30 00:08:57.196 15:45:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 47ad4804-24d1-4574-9398-bce720ff3252 MY_CLONE 00:08:57.454 15:45:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2624571e-0c55-4ed5-8dd4-11663b867a67 00:08:57.454 15:45:42 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2624571e-0c55-4ed5-8dd4-11663b867a67 00:08:57.713 15:45:43 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3178745 00:09:07.692 Initializing NVMe Controllers 00:09:07.692 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:07.692 Controller IO queue size 128, less than required. 00:09:07.692 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:07.692 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:07.692 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:07.692 Initialization complete. Launching workers. 
00:09:07.692 ======================================================== 00:09:07.692 Latency(us) 00:09:07.692 Device Information : IOPS MiB/s Average min max 00:09:07.692 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15249.30 59.57 8394.43 1371.40 141702.48 00:09:07.692 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15202.10 59.38 8420.44 5370.09 153393.98 00:09:07.692 ======================================================== 00:09:07.692 Total : 30451.40 118.95 8407.41 1371.40 153393.98 00:09:07.692 00:09:07.692 15:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:07.692 15:45:52 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d0a5dc7b-108a-4194-a2cc-fe833cf96134 00:09:07.693 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c03fe07-2b61-437d-bd11-ae991a93b6a7 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:07.951 rmmod nvme_rdma 00:09:07.951 rmmod nvme_fabrics 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3178095 ']' 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3178095 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3178095 ']' 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3178095 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.951 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3178095 00:09:08.210 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.210 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.210 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3178095' 00:09:08.210 killing process with pid 3178095 00:09:08.210 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3178095 00:09:08.210 15:45:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3178095 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:10.139 00:09:10.139 real 0m24.282s 00:09:10.139 user 1m16.736s 00:09:10.139 sys 0m6.634s 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:10.139 ************************************ 00:09:10.139 END TEST nvmf_lvol 00:09:10.139 ************************************ 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.139 ************************************ 00:09:10.139 START TEST nvmf_lvs_grow 00:09:10.139 ************************************ 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:09:10.139 * Looking for test storage... 
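For orientation, the nvmf_lvol run that just ended (END TEST above) boils down to the RPC sequence below. This is a condensed replay of what the trace shows, not the nvmf_lvol.sh source: paths are shortened, the discovery listener is omitted, and $LVS/$LVOL/$SNAP/$CLONE stand for the UUIDs the RPCs returned at runtime (5c03fe07-..., d0a5dc7b-..., 47ad4804-..., 2624571e-... in this run).

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                  # -> Malloc0
  $rpc bdev_malloc_create 64 512                                  # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  LVS=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  LVOL=$($rpc bdev_lvol_create -u "$LVS" lvol 20)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  # 10 s of 4 KiB random writes against the exported lvol while it is reshaped:
  spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
  SNAP=$($rpc bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)
  $rpc bdev_lvol_resize "$LVOL" 30
  CLONE=$($rpc bdev_lvol_clone "$SNAP" MY_CLONE)
  $rpc bdev_lvol_inflate "$CLONE"
  wait                                   # let perf finish, then tear down
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc bdev_lvol_delete "$LVOL"
  $rpc bdev_lvol_delete_lvstore -u "$LVS"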
00:09:10.139 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:10.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.139 --rc genhtml_branch_coverage=1 00:09:10.139 --rc genhtml_function_coverage=1 00:09:10.139 --rc genhtml_legend=1 00:09:10.139 --rc geninfo_all_blocks=1 00:09:10.139 --rc geninfo_unexecuted_blocks=1 00:09:10.139 00:09:10.139 ' 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:10.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.139 --rc genhtml_branch_coverage=1 00:09:10.139 --rc genhtml_function_coverage=1 00:09:10.139 --rc genhtml_legend=1 00:09:10.139 --rc geninfo_all_blocks=1 00:09:10.139 --rc geninfo_unexecuted_blocks=1 00:09:10.139 00:09:10.139 ' 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:10.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.139 --rc genhtml_branch_coverage=1 00:09:10.139 --rc genhtml_function_coverage=1 00:09:10.139 --rc genhtml_legend=1 00:09:10.139 --rc geninfo_all_blocks=1 00:09:10.139 --rc geninfo_unexecuted_blocks=1 00:09:10.139 00:09:10.139 ' 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:10.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.139 --rc genhtml_branch_coverage=1 00:09:10.139 --rc genhtml_function_coverage=1 00:09:10.139 --rc genhtml_legend=1 00:09:10.139 --rc geninfo_all_blocks=1 00:09:10.139 --rc geninfo_unexecuted_blocks=1 00:09:10.139 00:09:10.139 ' 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.139 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
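The scripts/common.sh trace repeated above is the lcov version gate: both version strings are split on dots, compared field by field, and the branch/function coverage flags are only exported when the installed lcov is older than 2.x. A compact stand-alone rendering of that comparison; the function name version_lt and its exact structure are illustrative, not the library code itself:

  version_lt() {                          # succeeds when $1 sorts before $2
      local IFS='.-:' i
      local -a v1=($1) v2=($2)
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1
  }
  if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
      export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi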
00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.140 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.140 15:45:55 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.706 15:46:01 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.706 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:16.707 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:16.707 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:16.707 15:46:01 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:16.707 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:16.707 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # rdma_device_init 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@530 -- # allocate_nic_ips 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:16.707 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:16.707 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:16.707 altname enp217s0f0np0 00:09:16.707 altname ens818f0np0 00:09:16.707 inet 192.168.100.8/24 scope global mlx_0_0 00:09:16.707 valid_lft forever preferred_lft forever 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:16.707 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:16.707 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:16.707 altname enp217s0f1np1 00:09:16.707 altname ens818f1np1 00:09:16.707 inet 192.168.100.9/24 scope global mlx_0_1 00:09:16.707 valid_lft forever preferred_lft forever 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:16.707 15:46:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:16.707 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:16.707 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:16.707 15:46:02 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.707 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:16.707 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:16.707 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:16.707 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:16.707 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:09:16.708 192.168.100.9' 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:09:16.708 192.168.100.9' 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # head -n 1 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:09:16.708 192.168.100.9' 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # tail -n +2 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # head -n 1 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3184335 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3184335 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3184335 ']' 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.708 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:16.708 [2024-11-17 15:46:02.171482] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:16.708 [2024-11-17 15:46:02.171585] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.967 [2024-11-17 15:46:02.303337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.967 [2024-11-17 15:46:02.398171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.967 [2024-11-17 15:46:02.398219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.967 [2024-11-17 15:46:02.398231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.967 [2024-11-17 15:46:02.398245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.967 [2024-11-17 15:46:02.398254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
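For reference, the interface bring-up traced above (nvmf/common.sh) reduces to a short standalone sketch: probe the IB/RDMA kernel modules, read the IPv4 address off each mlx5 port, and set the transport options used for the rest of the run. The helper names below are illustrative rather than SPDK's own; the interface names and addresses are the ones this run reports.

    #!/usr/bin/env bash
    # Sketch of the RDMA NIC bring-up seen above. Assumes two mlx5 ports named
    # mlx_0_0/mlx_0_1 that already carry 192.168.100.8/24 and 192.168.100.9/24.
    load_rdma_modules() {
        # Same module set nvmf/common.sh probes before using NVMe/RDMA.
        local m
        for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
            modprobe "$m"
        done
    }

    get_if_ip() {
        # "ip -o -4 addr show <if>" prints one line; field 4 is the CIDR address,
        # cut strips the prefix length, leaving the bare IPv4 address.
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }

    load_rdma_modules
    NVMF_FIRST_TARGET_IP=$(get_if_ip mlx_0_0)     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_if_ip mlx_0_1)    # 192.168.100.9 in this run
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'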
00:09:16.967 [2024-11-17 15:46:02.399628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.596 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.596 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:17.596 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.596 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.596 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:17.596 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.596 15:46:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:17.855 [2024-11-17 15:46:03.185310] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f2c203a4940) succeed. 00:09:17.855 [2024-11-17 15:46:03.193922] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f2c20360940) succeed. 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:17.855 ************************************ 00:09:17.855 START TEST lvs_grow_clean 00:09:17.855 ************************************ 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.855 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.114 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:18.114 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:18.373 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 00:09:18.373 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 00:09:18.373 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:18.374 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:18.374 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:18.374 15:46:03 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 lvol 150 00:09:18.632 15:46:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4b7b338f-d180-4213-a5d8-91366b4194ef 00:09:18.632 15:46:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:18.632 15:46:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:18.891 [2024-11-17 15:46:04.267509] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:18.891 [2024-11-17 15:46:04.267586] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:18.891 true 00:09:18.891 15:46:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 00:09:18.891 15:46:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:19.149 15:46:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:19.149 15:46:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:19.149 15:46:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4b7b338f-d180-4213-a5d8-91366b4194ef 00:09:19.408 15:46:04 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:19.667 [2024-11-17 15:46:04.986016] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:19.667 15:46:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:19.667 15:46:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3184913 00:09:19.667 15:46:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:19.667 15:46:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3184913 /var/tmp/bdevperf.sock 00:09:19.667 15:46:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3184913 ']' 00:09:19.667 15:46:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:19.667 15:46:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.667 15:46:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:19.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:19.667 15:46:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.667 15:46:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:19.667 15:46:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:19.926 [2024-11-17 15:46:05.250495] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
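The stretch of the run traced here (clean variant) boils down to a short RPC script against the running nvmf_tgt: back an AIO bdev with a 200 MiB file, build a 4 MiB-cluster lvstore and a 150 MiB lvol on it, export the lvol over NVMe-oF/RDMA, then grow the backing file and the lvstore. This is a condensed sketch only, assuming the target is already listening on the default /var/tmp/spdk.sock RPC socket; paths, names, and sizes are copied from the log and error handling is omitted.

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    aio_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev

    # Target side: AIO bdev -> lvstore -> lvol -> NVMe-oF subsystem on RDMA.
    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

    # Grow: enlarge the backing file, rescan the AIO bdev, grow the lvstore,
    # then confirm total_data_clusters went from 49 to 99 (4 MiB clusters).
    truncate -s 400M "$aio_file"
    $rpc bdev_aio_rescan aio_bdev
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

bdevperf, launched just above with -z so that its workload only starts once perform_tests is called, is then pointed at the same listener (bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0), and the grow is issued while its 10-second randwrite run is in flight, which is what the per-second IOPS table further down records.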
00:09:19.926 [2024-11-17 15:46:05.250596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3184913 ] 00:09:19.926 [2024-11-17 15:46:05.379801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.184 [2024-11-17 15:46:05.478041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.752 15:46:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.752 15:46:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:20.752 15:46:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:20.752 Nvme0n1 00:09:21.011 15:46:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:21.011 [ 00:09:21.011 { 00:09:21.011 "name": "Nvme0n1", 00:09:21.011 "aliases": [ 00:09:21.011 "4b7b338f-d180-4213-a5d8-91366b4194ef" 00:09:21.011 ], 00:09:21.011 "product_name": "NVMe disk", 00:09:21.011 "block_size": 4096, 00:09:21.011 "num_blocks": 38912, 00:09:21.011 "uuid": "4b7b338f-d180-4213-a5d8-91366b4194ef", 00:09:21.011 "numa_id": 1, 00:09:21.011 "assigned_rate_limits": { 00:09:21.011 "rw_ios_per_sec": 0, 00:09:21.011 "rw_mbytes_per_sec": 0, 00:09:21.011 "r_mbytes_per_sec": 0, 00:09:21.011 "w_mbytes_per_sec": 0 00:09:21.011 }, 00:09:21.011 "claimed": false, 00:09:21.011 "zoned": false, 00:09:21.011 "supported_io_types": { 00:09:21.011 "read": true, 00:09:21.011 "write": true, 00:09:21.011 "unmap": true, 00:09:21.011 "flush": true, 00:09:21.011 "reset": true, 00:09:21.011 "nvme_admin": true, 00:09:21.011 "nvme_io": true, 00:09:21.011 "nvme_io_md": false, 00:09:21.011 "write_zeroes": true, 00:09:21.011 "zcopy": false, 00:09:21.011 "get_zone_info": false, 00:09:21.011 "zone_management": false, 00:09:21.011 "zone_append": false, 00:09:21.011 "compare": true, 00:09:21.011 "compare_and_write": true, 00:09:21.011 "abort": true, 00:09:21.011 "seek_hole": false, 00:09:21.011 "seek_data": false, 00:09:21.011 "copy": true, 00:09:21.011 "nvme_iov_md": false 00:09:21.011 }, 00:09:21.011 "memory_domains": [ 00:09:21.011 { 00:09:21.011 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:21.011 "dma_device_type": 0 00:09:21.011 } 00:09:21.011 ], 00:09:21.011 "driver_specific": { 00:09:21.011 "nvme": [ 00:09:21.011 { 00:09:21.011 "trid": { 00:09:21.011 "trtype": "RDMA", 00:09:21.011 "adrfam": "IPv4", 00:09:21.011 "traddr": "192.168.100.8", 00:09:21.011 "trsvcid": "4420", 00:09:21.011 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:21.011 }, 00:09:21.011 "ctrlr_data": { 00:09:21.011 "cntlid": 1, 00:09:21.011 "vendor_id": "0x8086", 00:09:21.011 "model_number": "SPDK bdev Controller", 00:09:21.011 "serial_number": "SPDK0", 00:09:21.011 "firmware_revision": "25.01", 00:09:21.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:21.011 "oacs": { 00:09:21.011 "security": 0, 00:09:21.011 "format": 0, 00:09:21.011 "firmware": 0, 00:09:21.011 "ns_manage": 0 00:09:21.011 }, 00:09:21.011 "multi_ctrlr": true, 
00:09:21.011 "ana_reporting": false 00:09:21.011 }, 00:09:21.011 "vs": { 00:09:21.011 "nvme_version": "1.3" 00:09:21.011 }, 00:09:21.011 "ns_data": { 00:09:21.011 "id": 1, 00:09:21.011 "can_share": true 00:09:21.011 } 00:09:21.011 } 00:09:21.011 ], 00:09:21.011 "mp_policy": "active_passive" 00:09:21.011 } 00:09:21.011 } 00:09:21.011 ] 00:09:21.011 15:46:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3185180 00:09:21.011 15:46:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:21.011 15:46:06 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:21.270 Running I/O for 10 seconds... 00:09:22.206 Latency(us) 00:09:22.206 [2024-11-17T14:46:07.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.206 Nvme0n1 : 1.00 30335.00 118.50 0.00 0.00 0.00 0.00 0.00 00:09:22.206 [2024-11-17T14:46:07.750Z] =================================================================================================================== 00:09:22.206 [2024-11-17T14:46:07.750Z] Total : 30335.00 118.50 0.00 0.00 0.00 0.00 0.00 00:09:22.207 00:09:23.142 15:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 00:09:23.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.142 Nvme0n1 : 2.00 30718.50 119.99 0.00 0.00 0.00 0.00 0.00 00:09:23.142 [2024-11-17T14:46:08.686Z] =================================================================================================================== 00:09:23.142 [2024-11-17T14:46:08.687Z] Total : 30718.50 119.99 0.00 0.00 0.00 0.00 0.00 00:09:23.143 00:09:23.143 true 00:09:23.402 15:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 00:09:23.402 15:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:23.402 15:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:23.402 15:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:23.402 15:46:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3185180 00:09:24.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.337 Nvme0n1 : 3.00 30774.67 120.21 0.00 0.00 0.00 0.00 0.00 00:09:24.337 [2024-11-17T14:46:09.881Z] =================================================================================================================== 00:09:24.337 [2024-11-17T14:46:09.881Z] Total : 30774.67 120.21 0.00 0.00 0.00 0.00 0.00 00:09:24.337 00:09:25.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.272 Nvme0n1 : 4.00 30751.50 120.12 0.00 0.00 0.00 0.00 0.00 00:09:25.272 [2024-11-17T14:46:10.816Z] 
=================================================================================================================== 00:09:25.272 [2024-11-17T14:46:10.816Z] Total : 30751.50 120.12 0.00 0.00 0.00 0.00 0.00 00:09:25.272 00:09:26.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.208 Nvme0n1 : 5.00 30835.80 120.45 0.00 0.00 0.00 0.00 0.00 00:09:26.208 [2024-11-17T14:46:11.752Z] =================================================================================================================== 00:09:26.208 [2024-11-17T14:46:11.752Z] Total : 30835.80 120.45 0.00 0.00 0.00 0.00 0.00 00:09:26.208 00:09:27.143 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.143 Nvme0n1 : 6.00 30928.33 120.81 0.00 0.00 0.00 0.00 0.00 00:09:27.143 [2024-11-17T14:46:12.687Z] =================================================================================================================== 00:09:27.143 [2024-11-17T14:46:12.687Z] Total : 30928.33 120.81 0.00 0.00 0.00 0.00 0.00 00:09:27.143 00:09:28.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.078 Nvme0n1 : 7.00 31003.00 121.11 0.00 0.00 0.00 0.00 0.00 00:09:28.078 [2024-11-17T14:46:13.622Z] =================================================================================================================== 00:09:28.078 [2024-11-17T14:46:13.622Z] Total : 31003.00 121.11 0.00 0.00 0.00 0.00 0.00 00:09:28.078 00:09:29.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.455 Nvme0n1 : 8.00 31059.75 121.33 0.00 0.00 0.00 0.00 0.00 00:09:29.455 [2024-11-17T14:46:14.999Z] =================================================================================================================== 00:09:29.455 [2024-11-17T14:46:14.999Z] Total : 31059.75 121.33 0.00 0.00 0.00 0.00 0.00 00:09:29.455 00:09:30.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.390 Nvme0n1 : 9.00 31096.89 121.47 0.00 0.00 0.00 0.00 0.00 00:09:30.390 [2024-11-17T14:46:15.935Z] =================================================================================================================== 00:09:30.391 [2024-11-17T14:46:15.935Z] Total : 31096.89 121.47 0.00 0.00 0.00 0.00 0.00 00:09:30.391 00:09:31.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.328 Nvme0n1 : 10.00 31132.80 121.61 0.00 0.00 0.00 0.00 0.00 00:09:31.328 [2024-11-17T14:46:16.872Z] =================================================================================================================== 00:09:31.328 [2024-11-17T14:46:16.872Z] Total : 31132.80 121.61 0.00 0.00 0.00 0.00 0.00 00:09:31.328 00:09:31.328 00:09:31.328 Latency(us) 00:09:31.328 [2024-11-17T14:46:16.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.328 Nvme0n1 : 10.00 31134.08 121.62 0.00 0.00 4108.19 2949.12 16777.22 00:09:31.328 [2024-11-17T14:46:16.872Z] =================================================================================================================== 00:09:31.328 [2024-11-17T14:46:16.872Z] Total : 31134.08 121.62 0.00 0.00 4108.19 2949.12 16777.22 00:09:31.328 { 00:09:31.328 "results": [ 00:09:31.328 { 00:09:31.328 "job": "Nvme0n1", 00:09:31.328 "core_mask": "0x2", 00:09:31.328 "workload": "randwrite", 00:09:31.328 "status": "finished", 00:09:31.328 "queue_depth": 128, 00:09:31.328 "io_size": 4096, 
00:09:31.328 "runtime": 10.003571, 00:09:31.328 "iops": 31134.082019310903, 00:09:31.328 "mibps": 121.61750788793321, 00:09:31.328 "io_failed": 0, 00:09:31.328 "io_timeout": 0, 00:09:31.328 "avg_latency_us": 4108.187976440671, 00:09:31.328 "min_latency_us": 2949.12, 00:09:31.328 "max_latency_us": 16777.216 00:09:31.328 } 00:09:31.328 ], 00:09:31.328 "core_count": 1 00:09:31.328 } 00:09:31.328 15:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3184913 00:09:31.328 15:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3184913 ']' 00:09:31.328 15:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3184913 00:09:31.328 15:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:31.328 15:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.328 15:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3184913 00:09:31.328 15:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:31.328 15:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:31.328 15:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3184913' 00:09:31.328 killing process with pid 3184913 00:09:31.328 15:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3184913 00:09:31.328 Received shutdown signal, test time was about 10.000000 seconds 00:09:31.328 00:09:31.328 Latency(us) 00:09:31.328 [2024-11-17T14:46:16.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.328 [2024-11-17T14:46:16.872Z] =================================================================================================================== 00:09:31.328 [2024-11-17T14:46:16.872Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:31.328 15:46:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3184913 00:09:32.264 15:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:32.264 15:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:32.522 15:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 00:09:32.522 15:46:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:32.781 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:32.781 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:32.781 15:46:18 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:32.781 [2024-11-17 15:46:18.306389] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 00:09:33.040 request: 00:09:33.040 { 00:09:33.040 "uuid": "4a8fb99c-3489-485b-a9e8-eacd7d2d3b12", 00:09:33.040 "method": "bdev_lvol_get_lvstores", 00:09:33.040 "req_id": 1 00:09:33.040 } 00:09:33.040 Got JSON-RPC error response 00:09:33.040 response: 00:09:33.040 { 00:09:33.040 "code": -19, 00:09:33.040 "message": "No such device" 00:09:33.040 } 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:33.040 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:33.041 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:33.300 aio_bdev 00:09:33.300 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4b7b338f-d180-4213-a5d8-91366b4194ef 00:09:33.300 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=4b7b338f-d180-4213-a5d8-91366b4194ef 00:09:33.300 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.300 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:33.300 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.300 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.300 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:33.559 15:46:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4b7b338f-d180-4213-a5d8-91366b4194ef -t 2000 00:09:33.816 [ 00:09:33.816 { 00:09:33.816 "name": "4b7b338f-d180-4213-a5d8-91366b4194ef", 00:09:33.816 "aliases": [ 00:09:33.816 "lvs/lvol" 00:09:33.816 ], 00:09:33.816 "product_name": "Logical Volume", 00:09:33.816 "block_size": 4096, 00:09:33.816 "num_blocks": 38912, 00:09:33.816 "uuid": "4b7b338f-d180-4213-a5d8-91366b4194ef", 00:09:33.816 "assigned_rate_limits": { 00:09:33.816 "rw_ios_per_sec": 0, 00:09:33.816 "rw_mbytes_per_sec": 0, 00:09:33.816 "r_mbytes_per_sec": 0, 00:09:33.816 "w_mbytes_per_sec": 0 00:09:33.816 }, 00:09:33.816 "claimed": false, 00:09:33.816 "zoned": false, 00:09:33.816 "supported_io_types": { 00:09:33.816 "read": true, 00:09:33.816 "write": true, 00:09:33.816 "unmap": true, 00:09:33.816 "flush": false, 00:09:33.816 "reset": true, 00:09:33.816 "nvme_admin": false, 00:09:33.816 "nvme_io": false, 00:09:33.816 "nvme_io_md": false, 00:09:33.816 "write_zeroes": true, 00:09:33.816 "zcopy": false, 00:09:33.816 "get_zone_info": false, 00:09:33.816 "zone_management": false, 00:09:33.816 "zone_append": false, 00:09:33.816 "compare": false, 00:09:33.816 "compare_and_write": false, 00:09:33.816 "abort": false, 00:09:33.816 "seek_hole": true, 00:09:33.816 "seek_data": true, 00:09:33.816 "copy": false, 00:09:33.816 "nvme_iov_md": false 00:09:33.816 }, 00:09:33.816 "driver_specific": { 00:09:33.816 "lvol": { 00:09:33.816 "lvol_store_uuid": "4a8fb99c-3489-485b-a9e8-eacd7d2d3b12", 00:09:33.816 "base_bdev": "aio_bdev", 00:09:33.816 "thin_provision": false, 00:09:33.816 "num_allocated_clusters": 38, 00:09:33.816 "snapshot": false, 00:09:33.816 "clone": false, 00:09:33.816 "esnap_clone": false 00:09:33.816 } 00:09:33.816 } 00:09:33.816 } 00:09:33.816 ] 00:09:33.816 15:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:33.816 15:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 00:09:33.816 15:46:19 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:33.816 15:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:33.816 15:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 00:09:33.816 15:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:34.074 15:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:34.074 15:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4b7b338f-d180-4213-a5d8-91366b4194ef 00:09:34.333 15:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4a8fb99c-3489-485b-a9e8-eacd7d2d3b12 00:09:34.592 15:46:19 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:34.592 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.592 00:09:34.592 real 0m16.805s 00:09:34.592 user 0m16.595s 00:09:34.592 sys 0m1.293s 00:09:34.592 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.592 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:34.592 ************************************ 00:09:34.592 END TEST lvs_grow_clean 00:09:34.592 ************************************ 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:34.850 ************************************ 00:09:34.850 START TEST lvs_grow_dirty 00:09:34.850 ************************************ 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # 
local aio_final_size_mb=400 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.850 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:34.851 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.109 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:35.109 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:35.109 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=369948c6-4907-4392-884e-fcd8372579f4 00:09:35.109 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:35.109 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:35.368 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:35.368 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:35.368 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 369948c6-4907-4392-884e-fcd8372579f4 lvol 150 00:09:35.627 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=192c01ea-69eb-4997-a226-3184f8f02a3f 00:09:35.627 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.627 15:46:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:35.627 [2024-11-17 15:46:21.155485] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:35.627 [2024-11-17 15:46:21.155557] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:35.627 true 00:09:35.886 15:46:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:35.886 15:46:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:35.886 15:46:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:35.886 15:46:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:36.145 15:46:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 192c01ea-69eb-4997-a226-3184f8f02a3f 00:09:36.404 15:46:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:36.404 [2024-11-17 15:46:21.893999] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:36.404 15:46:21 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:36.663 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3187908 00:09:36.663 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:36.663 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:36.663 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3187908 /var/tmp/bdevperf.sock 00:09:36.663 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3187908 ']' 00:09:36.663 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:36.663 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.663 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:36.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:36.663 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.663 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:36.663 [2024-11-17 15:46:22.172173] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:36.663 [2024-11-17 15:46:22.172264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3187908 ] 00:09:36.922 [2024-11-17 15:46:22.304972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.922 [2024-11-17 15:46:22.403240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.490 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.490 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:37.490 15:46:22 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:37.749 Nvme0n1 00:09:37.749 15:46:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:38.008 [ 00:09:38.008 { 00:09:38.008 "name": "Nvme0n1", 00:09:38.008 "aliases": [ 00:09:38.008 "192c01ea-69eb-4997-a226-3184f8f02a3f" 00:09:38.008 ], 00:09:38.008 "product_name": "NVMe disk", 00:09:38.008 "block_size": 4096, 00:09:38.008 "num_blocks": 38912, 00:09:38.008 "uuid": "192c01ea-69eb-4997-a226-3184f8f02a3f", 00:09:38.008 "numa_id": 1, 00:09:38.008 "assigned_rate_limits": { 00:09:38.008 "rw_ios_per_sec": 0, 00:09:38.008 "rw_mbytes_per_sec": 0, 00:09:38.008 "r_mbytes_per_sec": 0, 00:09:38.008 "w_mbytes_per_sec": 0 00:09:38.008 }, 00:09:38.008 "claimed": false, 00:09:38.008 "zoned": false, 00:09:38.008 "supported_io_types": { 00:09:38.008 "read": true, 00:09:38.008 "write": true, 00:09:38.008 "unmap": true, 00:09:38.008 "flush": true, 00:09:38.008 "reset": true, 00:09:38.008 "nvme_admin": true, 00:09:38.008 "nvme_io": true, 00:09:38.008 "nvme_io_md": false, 00:09:38.008 "write_zeroes": true, 00:09:38.008 "zcopy": false, 00:09:38.008 "get_zone_info": false, 00:09:38.008 "zone_management": false, 00:09:38.008 "zone_append": false, 00:09:38.008 "compare": true, 00:09:38.008 "compare_and_write": true, 00:09:38.008 "abort": true, 00:09:38.008 "seek_hole": false, 00:09:38.008 "seek_data": false, 00:09:38.008 "copy": true, 00:09:38.008 "nvme_iov_md": false 00:09:38.008 }, 00:09:38.008 "memory_domains": [ 00:09:38.008 { 00:09:38.008 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:38.008 "dma_device_type": 0 00:09:38.008 } 00:09:38.008 ], 00:09:38.008 "driver_specific": { 00:09:38.008 "nvme": [ 00:09:38.008 { 00:09:38.008 "trid": { 00:09:38.008 "trtype": "RDMA", 00:09:38.008 "adrfam": "IPv4", 00:09:38.008 "traddr": "192.168.100.8", 00:09:38.008 "trsvcid": "4420", 00:09:38.008 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:38.008 }, 00:09:38.008 "ctrlr_data": { 00:09:38.008 "cntlid": 1, 00:09:38.008 "vendor_id": "0x8086", 00:09:38.008 "model_number": "SPDK bdev Controller", 00:09:38.008 "serial_number": "SPDK0", 00:09:38.008 "firmware_revision": "25.01", 00:09:38.008 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:38.008 "oacs": { 00:09:38.008 "security": 0, 00:09:38.008 "format": 0, 00:09:38.008 "firmware": 0, 00:09:38.008 "ns_manage": 0 00:09:38.008 }, 00:09:38.008 "multi_ctrlr": true, 
00:09:38.008 "ana_reporting": false 00:09:38.008 }, 00:09:38.008 "vs": { 00:09:38.008 "nvme_version": "1.3" 00:09:38.008 }, 00:09:38.008 "ns_data": { 00:09:38.008 "id": 1, 00:09:38.008 "can_share": true 00:09:38.008 } 00:09:38.008 } 00:09:38.008 ], 00:09:38.008 "mp_policy": "active_passive" 00:09:38.008 } 00:09:38.008 } 00:09:38.008 ] 00:09:38.008 15:46:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3188180 00:09:38.008 15:46:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:38.008 15:46:23 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:38.008 Running I/O for 10 seconds... 00:09:39.005 Latency(us) 00:09:39.005 [2024-11-17T14:46:24.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.005 Nvme0n1 : 1.00 30240.00 118.12 0.00 0.00 0.00 0.00 0.00 00:09:39.005 [2024-11-17T14:46:24.549Z] =================================================================================================================== 00:09:39.005 [2024-11-17T14:46:24.549Z] Total : 30240.00 118.12 0.00 0.00 0.00 0.00 0.00 00:09:39.005 00:09:39.941 15:46:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:40.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.200 Nvme0n1 : 2.00 30624.50 119.63 0.00 0.00 0.00 0.00 0.00 00:09:40.200 [2024-11-17T14:46:25.744Z] =================================================================================================================== 00:09:40.200 [2024-11-17T14:46:25.744Z] Total : 30624.50 119.63 0.00 0.00 0.00 0.00 0.00 00:09:40.200 00:09:40.200 true 00:09:40.200 15:46:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:40.200 15:46:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:40.459 15:46:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:40.459 15:46:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:40.459 15:46:25 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3188180 00:09:41.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.026 Nvme0n1 : 3.00 30699.00 119.92 0.00 0.00 0.00 0.00 0.00 00:09:41.026 [2024-11-17T14:46:26.570Z] =================================================================================================================== 00:09:41.026 [2024-11-17T14:46:26.570Z] Total : 30699.00 119.92 0.00 0.00 0.00 0.00 0.00 00:09:41.026 00:09:42.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.402 Nvme0n1 : 4.00 30840.00 120.47 0.00 0.00 0.00 0.00 0.00 00:09:42.402 [2024-11-17T14:46:27.946Z] 
=================================================================================================================== 00:09:42.402 [2024-11-17T14:46:27.946Z] Total : 30840.00 120.47 0.00 0.00 0.00 0.00 0.00 00:09:42.402 00:09:43.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:43.339 Nvme0n1 : 5.00 30937.80 120.85 0.00 0.00 0.00 0.00 0.00 00:09:43.339 [2024-11-17T14:46:28.883Z] =================================================================================================================== 00:09:43.339 [2024-11-17T14:46:28.883Z] Total : 30937.80 120.85 0.00 0.00 0.00 0.00 0.00 00:09:43.339 00:09:44.271 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.271 Nvme0n1 : 6.00 31008.17 121.13 0.00 0.00 0.00 0.00 0.00 00:09:44.271 [2024-11-17T14:46:29.815Z] =================================================================================================================== 00:09:44.271 [2024-11-17T14:46:29.815Z] Total : 31008.17 121.13 0.00 0.00 0.00 0.00 0.00 00:09:44.271 00:09:45.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.205 Nvme0n1 : 7.00 30976.29 121.00 0.00 0.00 0.00 0.00 0.00 00:09:45.205 [2024-11-17T14:46:30.749Z] =================================================================================================================== 00:09:45.205 [2024-11-17T14:46:30.749Z] Total : 30976.29 121.00 0.00 0.00 0.00 0.00 0.00 00:09:45.205 00:09:46.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.142 Nvme0n1 : 8.00 31011.88 121.14 0.00 0.00 0.00 0.00 0.00 00:09:46.142 [2024-11-17T14:46:31.686Z] =================================================================================================================== 00:09:46.142 [2024-11-17T14:46:31.686Z] Total : 31011.88 121.14 0.00 0.00 0.00 0.00 0.00 00:09:46.142 00:09:47.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.078 Nvme0n1 : 9.00 31058.00 121.32 0.00 0.00 0.00 0.00 0.00 00:09:47.078 [2024-11-17T14:46:32.622Z] =================================================================================================================== 00:09:47.078 [2024-11-17T14:46:32.622Z] Total : 31058.00 121.32 0.00 0.00 0.00 0.00 0.00 00:09:47.078 00:09:48.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.014 Nvme0n1 : 10.00 31094.20 121.46 0.00 0.00 0.00 0.00 0.00 00:09:48.014 [2024-11-17T14:46:33.558Z] =================================================================================================================== 00:09:48.014 [2024-11-17T14:46:33.558Z] Total : 31094.20 121.46 0.00 0.00 0.00 0.00 0.00 00:09:48.014 00:09:48.014 00:09:48.014 Latency(us) 00:09:48.014 [2024-11-17T14:46:33.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.014 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.014 Nvme0n1 : 10.00 31093.67 121.46 0.00 0.00 4113.31 2831.16 19398.66 00:09:48.014 [2024-11-17T14:46:33.558Z] =================================================================================================================== 00:09:48.014 [2024-11-17T14:46:33.558Z] Total : 31093.67 121.46 0.00 0.00 4113.31 2831.16 19398.66 00:09:48.014 { 00:09:48.014 "results": [ 00:09:48.014 { 00:09:48.014 "job": "Nvme0n1", 00:09:48.014 "core_mask": "0x2", 00:09:48.014 "workload": "randwrite", 00:09:48.014 "status": "finished", 00:09:48.014 "queue_depth": 128, 00:09:48.014 "io_size": 4096, 
00:09:48.014 "runtime": 10.004288, 00:09:48.014 "iops": 31093.667035575145, 00:09:48.014 "mibps": 121.45963685771541, 00:09:48.014 "io_failed": 0, 00:09:48.014 "io_timeout": 0, 00:09:48.014 "avg_latency_us": 4113.314865279198, 00:09:48.014 "min_latency_us": 2831.1552, 00:09:48.014 "max_latency_us": 19398.656 00:09:48.014 } 00:09:48.014 ], 00:09:48.014 "core_count": 1 00:09:48.014 } 00:09:48.014 15:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3187908 00:09:48.014 15:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3187908 ']' 00:09:48.014 15:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3187908 00:09:48.273 15:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:48.273 15:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.273 15:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3187908 00:09:48.273 15:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:48.273 15:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:48.273 15:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3187908' 00:09:48.273 killing process with pid 3187908 00:09:48.273 15:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3187908 00:09:48.273 Received shutdown signal, test time was about 10.000000 seconds 00:09:48.273 00:09:48.273 Latency(us) 00:09:48.273 [2024-11-17T14:46:33.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.273 [2024-11-17T14:46:33.817Z] =================================================================================================================== 00:09:48.273 [2024-11-17T14:46:33.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:48.273 15:46:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3187908 00:09:49.209 15:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:49.209 15:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:49.468 15:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:49.468 15:46:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:49.727 15:46:35 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3184335 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3184335 00:09:49.727 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3184335 Killed "${NVMF_APP[@]}" "$@" 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3190161 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3190161 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3190161 ']' 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.727 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.727 [2024-11-17 15:46:35.193089] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:49.727 [2024-11-17 15:46:35.193181] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.986 [2024-11-17 15:46:35.332713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.986 [2024-11-17 15:46:35.427019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.986 [2024-11-17 15:46:35.427067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.986 [2024-11-17 15:46:35.427079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.986 [2024-11-17 15:46:35.427091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
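In outline, the "dirty" part of the test the trace above just completed: the lvstore was grown while bdevperf was writing to it, and the original nvmf_tgt (pid 3184335) was then killed with SIGKILL, leaving the lvstore open, i.e. dirty, on the AIO file. A fresh nvmf_tgt has just been started, and the next step re-attaches the AIO file so blobstore recovery can replay the metadata. Sketched with the same shorthand as before ($rpc for scripts/rpc.py, aio_file for the backing path, $old_nvmfpid for the first target's pid; the full build/bin path of nvmf_tgt is shortened):

  # grow the lvstore to match the 400M backing file while I/O is still running;
  # total_data_clusters goes from 49 to 99, checked by (( data_clusters == 99 ))
  $rpc bdev_lvol_grow_lvstore -u "$lvs"
  # simulate a crash: SIGKILL the target without ever unloading the lvstore
  kill -9 "$old_nvmfpid"
  # restart the target (single core, all tracepoint groups) and note its pid
  nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  new_nvmfpid=$!
  # re-create the AIO bdev on the same file; the lvstore is found dirty and
  # blobstore recovery runs (the "Performing recovery on blobstore" notice below)
  $rpc bdev_aio_create aio_file aio_bdev 4096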
00:09:49.986 [2024-11-17 15:46:35.427100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:49.986 [2024-11-17 15:46:35.428438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.553 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.554 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:50.554 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:50.554 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:50.554 15:46:35 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:50.554 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.554 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:50.812 [2024-11-17 15:46:36.193042] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:50.812 [2024-11-17 15:46:36.193182] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:50.812 [2024-11-17 15:46:36.193219] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:50.812 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:50.812 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 192c01ea-69eb-4997-a226-3184f8f02a3f 00:09:50.812 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=192c01ea-69eb-4997-a226-3184f8f02a3f 00:09:50.813 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.813 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:50.813 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.813 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.813 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:51.071 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 192c01ea-69eb-4997-a226-3184f8f02a3f -t 2000 00:09:51.071 [ 00:09:51.071 { 00:09:51.071 "name": "192c01ea-69eb-4997-a226-3184f8f02a3f", 00:09:51.071 "aliases": [ 00:09:51.071 "lvs/lvol" 00:09:51.071 ], 00:09:51.071 "product_name": "Logical Volume", 00:09:51.071 "block_size": 4096, 00:09:51.071 "num_blocks": 38912, 00:09:51.071 "uuid": "192c01ea-69eb-4997-a226-3184f8f02a3f", 00:09:51.071 "assigned_rate_limits": { 00:09:51.071 "rw_ios_per_sec": 0, 00:09:51.071 "rw_mbytes_per_sec": 0, 
00:09:51.071 "r_mbytes_per_sec": 0, 00:09:51.071 "w_mbytes_per_sec": 0 00:09:51.071 }, 00:09:51.071 "claimed": false, 00:09:51.071 "zoned": false, 00:09:51.071 "supported_io_types": { 00:09:51.071 "read": true, 00:09:51.071 "write": true, 00:09:51.071 "unmap": true, 00:09:51.071 "flush": false, 00:09:51.071 "reset": true, 00:09:51.071 "nvme_admin": false, 00:09:51.071 "nvme_io": false, 00:09:51.071 "nvme_io_md": false, 00:09:51.071 "write_zeroes": true, 00:09:51.071 "zcopy": false, 00:09:51.072 "get_zone_info": false, 00:09:51.072 "zone_management": false, 00:09:51.072 "zone_append": false, 00:09:51.072 "compare": false, 00:09:51.072 "compare_and_write": false, 00:09:51.072 "abort": false, 00:09:51.072 "seek_hole": true, 00:09:51.072 "seek_data": true, 00:09:51.072 "copy": false, 00:09:51.072 "nvme_iov_md": false 00:09:51.072 }, 00:09:51.072 "driver_specific": { 00:09:51.072 "lvol": { 00:09:51.072 "lvol_store_uuid": "369948c6-4907-4392-884e-fcd8372579f4", 00:09:51.072 "base_bdev": "aio_bdev", 00:09:51.072 "thin_provision": false, 00:09:51.072 "num_allocated_clusters": 38, 00:09:51.072 "snapshot": false, 00:09:51.072 "clone": false, 00:09:51.072 "esnap_clone": false 00:09:51.072 } 00:09:51.072 } 00:09:51.072 } 00:09:51.072 ] 00:09:51.072 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:51.072 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:51.072 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:51.330 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:51.330 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:51.331 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:51.590 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:51.590 15:46:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:51.848 [2024-11-17 15:46:37.157409] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:51.848 request: 00:09:51.848 { 00:09:51.848 "uuid": "369948c6-4907-4392-884e-fcd8372579f4", 00:09:51.848 "method": "bdev_lvol_get_lvstores", 00:09:51.848 "req_id": 1 00:09:51.848 } 00:09:51.848 Got JSON-RPC error response 00:09:51.848 response: 00:09:51.848 { 00:09:51.848 "code": -19, 00:09:51.848 "message": "No such device" 00:09:51.848 } 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:51.848 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:52.107 aio_bdev 00:09:52.107 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 192c01ea-69eb-4997-a226-3184f8f02a3f 00:09:52.107 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=192c01ea-69eb-4997-a226-3184f8f02a3f 00:09:52.107 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.107 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:52.107 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.107 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.107 15:46:37 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:52.366 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 192c01ea-69eb-4997-a226-3184f8f02a3f -t 2000 00:09:52.366 [ 00:09:52.366 { 00:09:52.366 "name": "192c01ea-69eb-4997-a226-3184f8f02a3f", 00:09:52.366 "aliases": [ 00:09:52.366 "lvs/lvol" 00:09:52.366 ], 00:09:52.366 "product_name": "Logical Volume", 00:09:52.366 "block_size": 4096, 00:09:52.366 "num_blocks": 38912, 00:09:52.366 "uuid": "192c01ea-69eb-4997-a226-3184f8f02a3f", 00:09:52.366 "assigned_rate_limits": { 00:09:52.366 "rw_ios_per_sec": 0, 00:09:52.366 "rw_mbytes_per_sec": 0, 00:09:52.366 "r_mbytes_per_sec": 0, 00:09:52.366 "w_mbytes_per_sec": 0 00:09:52.366 }, 00:09:52.366 "claimed": false, 00:09:52.366 "zoned": false, 00:09:52.366 "supported_io_types": { 00:09:52.366 "read": true, 00:09:52.366 "write": true, 00:09:52.366 "unmap": true, 00:09:52.366 "flush": false, 00:09:52.366 "reset": true, 00:09:52.366 "nvme_admin": false, 00:09:52.366 "nvme_io": false, 00:09:52.366 "nvme_io_md": false, 00:09:52.366 "write_zeroes": true, 00:09:52.366 "zcopy": false, 00:09:52.366 "get_zone_info": false, 00:09:52.366 "zone_management": false, 00:09:52.366 "zone_append": false, 00:09:52.366 "compare": false, 00:09:52.366 "compare_and_write": false, 00:09:52.366 "abort": false, 00:09:52.366 "seek_hole": true, 00:09:52.366 "seek_data": true, 00:09:52.366 "copy": false, 00:09:52.366 "nvme_iov_md": false 00:09:52.366 }, 00:09:52.366 "driver_specific": { 00:09:52.366 "lvol": { 00:09:52.366 "lvol_store_uuid": "369948c6-4907-4392-884e-fcd8372579f4", 00:09:52.366 "base_bdev": "aio_bdev", 00:09:52.366 "thin_provision": false, 00:09:52.366 "num_allocated_clusters": 38, 00:09:52.366 "snapshot": false, 00:09:52.366 "clone": false, 00:09:52.366 "esnap_clone": false 00:09:52.366 } 00:09:52.366 } 00:09:52.366 } 00:09:52.366 ] 00:09:52.624 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:52.624 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:52.624 15:46:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:52.624 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:52.624 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:52.624 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:52.882 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:52.882 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 192c01ea-69eb-4997-a226-3184f8f02a3f 00:09:53.141 15:46:38 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 369948c6-4907-4392-884e-fcd8372579f4 00:09:53.141 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:53.399 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:53.399 00:09:53.399 real 0m18.673s 00:09:53.399 user 0m48.503s 00:09:53.399 sys 0m3.440s 00:09:53.399 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.399 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:53.399 ************************************ 00:09:53.399 END TEST lvs_grow_dirty 00:09:53.399 ************************************ 00:09:53.399 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:53.399 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:53.399 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:53.399 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:53.399 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:53.399 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:53.399 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:53.399 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:53.399 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:53.399 nvmf_trace.0 00:09:53.658 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:53.658 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:53.658 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:53.658 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:53.658 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:53.658 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:53.658 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:53.658 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:53.658 15:46:38 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:53.658 rmmod nvme_rdma 00:09:53.658 rmmod nvme_fabrics 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:53.658 
15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3190161 ']' 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3190161 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3190161 ']' 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3190161 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3190161 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3190161' 00:09:53.658 killing process with pid 3190161 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3190161 00:09:53.658 15:46:39 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3190161 00:09:54.593 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:54.593 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:09:54.593 00:09:54.593 real 0m44.668s 00:09:54.593 user 1m12.140s 00:09:54.593 sys 0m10.397s 00:09:54.594 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.594 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:54.594 ************************************ 00:09:54.594 END TEST nvmf_lvs_grow 00:09:54.594 ************************************ 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.852 ************************************ 00:09:54.852 START TEST nvmf_bdev_io_wait 00:09:54.852 ************************************ 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:54.852 * Looking for test storage... 
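The lvs_grow run above finished with its teardown before nvmf_bdev_io_wait started; condensed, that cleanup (nvmf_lvs_grow.sh @92-@95 plus nvmftestfini) was roughly the following, with $rpc and aio_file as in the earlier sketches and $output_dir standing in for the Jenkins output directory:

  # unwind the storage stack: lvol, lvstore, AIO bdev, then the backing file
  $rpc bdev_lvol_delete "$lvol"
  $rpc bdev_lvol_delete_lvstore -u "$lvs"
  $rpc bdev_aio_delete aio_bdev
  rm -f aio_file
  # archive the trace shared-memory file for offline analysis, then stop NVMe-oF
  tar -C /dev/shm/ -czf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"    # pid 3190161 in this run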
00:09:54.852 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.852 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:54.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.853 --rc genhtml_branch_coverage=1 00:09:54.853 --rc genhtml_function_coverage=1 00:09:54.853 --rc genhtml_legend=1 00:09:54.853 --rc geninfo_all_blocks=1 00:09:54.853 --rc geninfo_unexecuted_blocks=1 00:09:54.853 00:09:54.853 ' 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:54.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.853 --rc genhtml_branch_coverage=1 00:09:54.853 --rc genhtml_function_coverage=1 00:09:54.853 --rc genhtml_legend=1 00:09:54.853 --rc geninfo_all_blocks=1 00:09:54.853 --rc geninfo_unexecuted_blocks=1 00:09:54.853 00:09:54.853 ' 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:54.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.853 --rc genhtml_branch_coverage=1 00:09:54.853 --rc genhtml_function_coverage=1 00:09:54.853 --rc genhtml_legend=1 00:09:54.853 --rc geninfo_all_blocks=1 00:09:54.853 --rc geninfo_unexecuted_blocks=1 00:09:54.853 00:09:54.853 ' 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:54.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.853 --rc genhtml_branch_coverage=1 00:09:54.853 --rc genhtml_function_coverage=1 00:09:54.853 --rc genhtml_legend=1 00:09:54.853 --rc geninfo_all_blocks=1 00:09:54.853 --rc geninfo_unexecuted_blocks=1 00:09:54.853 00:09:54.853 ' 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.853 15:46:40 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.853 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:55.112 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:09:55.112 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.113 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:55.113 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:55.113 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:55.113 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.113 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.113 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.113 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:55.113 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:55.113 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:55.113 15:46:40 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.679 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.680 15:46:46 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:01.680 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:01.680 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:01.680 15:46:46 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:01.680 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:01.680 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # rdma_device_init 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:01.680 15:46:46 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:01.680 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:01.681 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:01.681 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:01.681 altname enp217s0f0np0 00:10:01.681 altname ens818f0np0 00:10:01.681 inet 192.168.100.8/24 scope global mlx_0_0 00:10:01.681 valid_lft forever preferred_lft forever 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:01.681 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:01.681 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:01.681 altname enp217s0f1np1 00:10:01.681 altname ens818f1np1 00:10:01.681 inet 192.168.100.9/24 scope global mlx_0_1 00:10:01.681 valid_lft forever preferred_lft forever 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile 
-t rxe_net_devs 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:01.681 192.168.100.9' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:01.681 192.168.100.9' 00:10:01.681 
15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # head -n 1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:01.681 192.168.100.9' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # head -n 1 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # tail -n +2 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3194359 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3194359 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3194359 ']' 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.681 15:46:47 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.940 [2024-11-17 15:46:47.290049] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
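The nvmftestinit trace above reduces to two steps on this phy setup: load the IB/RDMA kernel modules, then read the IPv4 address of each mlx5 port to derive NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that sequence, assuming the same mlx_0_0/mlx_0_1 interface names seen in this run (the first_ipv4 helper name is illustrative, not part of nvmf/common.sh):

#!/usr/bin/env bash
# IB/RDMA modules probed by load_ib_rdma_modules, plus nvme-rdma which the trace loads last.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
    modprobe "$mod"
done

# Same pipeline the trace uses to read an interface address: first IPv4 address of the netdev.
first_ipv4() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(first_ipv4 mlx_0_0)     # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(first_ipv4 mlx_0_1)    # 192.168.100.9 in this run
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'   # final value used by the trace for rdma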
00:10:01.940 [2024-11-17 15:46:47.290156] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.940 [2024-11-17 15:46:47.428531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.198 [2024-11-17 15:46:47.531252] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.198 [2024-11-17 15:46:47.531301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.198 [2024-11-17 15:46:47.531314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.198 [2024-11-17 15:46:47.531328] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.198 [2024-11-17 15:46:47.531339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.198 [2024-11-17 15:46:47.533735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.198 [2024-11-17 15:46:47.533807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.198 [2024-11-17 15:46:47.533867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.198 [2024-11-17 15:46:47.533876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.765 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.023 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.023 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:03.023 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.023 15:46:48 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.023 [2024-11-17 15:46:48.392786] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f411e5bd940) succeed. 00:10:03.023 [2024-11-17 15:46:48.402862] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f411e579940) succeed. 00:10:03.282 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.282 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.282 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.283 Malloc0 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.283 [2024-11-17 15:46:48.770569] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3194644 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3194646 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local 
subsystem config 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:03.283 { 00:10:03.283 "params": { 00:10:03.283 "name": "Nvme$subsystem", 00:10:03.283 "trtype": "$TEST_TRANSPORT", 00:10:03.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.283 "adrfam": "ipv4", 00:10:03.283 "trsvcid": "$NVMF_PORT", 00:10:03.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.283 "hdgst": ${hdgst:-false}, 00:10:03.283 "ddgst": ${ddgst:-false} 00:10:03.283 }, 00:10:03.283 "method": "bdev_nvme_attach_controller" 00:10:03.283 } 00:10:03.283 EOF 00:10:03.283 )") 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3194648 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:03.283 { 00:10:03.283 "params": { 00:10:03.283 "name": "Nvme$subsystem", 00:10:03.283 "trtype": "$TEST_TRANSPORT", 00:10:03.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.283 "adrfam": "ipv4", 00:10:03.283 "trsvcid": "$NVMF_PORT", 00:10:03.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.283 "hdgst": ${hdgst:-false}, 00:10:03.283 "ddgst": ${ddgst:-false} 00:10:03.283 }, 00:10:03.283 "method": "bdev_nvme_attach_controller" 00:10:03.283 } 00:10:03.283 EOF 00:10:03.283 )") 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3194651 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:10:03.283 { 00:10:03.283 "params": { 00:10:03.283 "name": "Nvme$subsystem", 00:10:03.283 "trtype": "$TEST_TRANSPORT", 00:10:03.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.283 "adrfam": "ipv4", 00:10:03.283 "trsvcid": "$NVMF_PORT", 00:10:03.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.283 "hdgst": ${hdgst:-false}, 00:10:03.283 "ddgst": ${ddgst:-false} 00:10:03.283 }, 00:10:03.283 "method": "bdev_nvme_attach_controller" 00:10:03.283 } 00:10:03.283 EOF 00:10:03.283 )") 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:03.283 { 00:10:03.283 "params": { 00:10:03.283 "name": "Nvme$subsystem", 00:10:03.283 "trtype": "$TEST_TRANSPORT", 00:10:03.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.283 "adrfam": "ipv4", 00:10:03.283 "trsvcid": "$NVMF_PORT", 00:10:03.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.283 "hdgst": ${hdgst:-false}, 00:10:03.283 "ddgst": ${ddgst:-false} 00:10:03.283 }, 00:10:03.283 "method": "bdev_nvme_attach_controller" 00:10:03.283 } 00:10:03.283 EOF 00:10:03.283 )") 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3194644 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
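Since the target was started with --wait-for-rpc, the rpc_cmd calls traced above are ordinary JSON-RPCs against the default /var/tmp/spdk.sock socket. Run by hand with scripts/rpc.py from an SPDK checkout, the same provisioning would look roughly like this (the rpc.py path is an assumption; option values are exactly those in the trace):

RPC="./scripts/rpc.py"                                    # adjust to the checkout under test
$RPC bdev_set_options -p 5 -c 1                           # deliberately tiny bdev_io pool, which is what bdev_io_wait exercises
$RPC framework_start_init
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB backing bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420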
00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:03.283 "params": { 00:10:03.283 "name": "Nvme1", 00:10:03.283 "trtype": "rdma", 00:10:03.283 "traddr": "192.168.100.8", 00:10:03.283 "adrfam": "ipv4", 00:10:03.283 "trsvcid": "4420", 00:10:03.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.283 "hdgst": false, 00:10:03.283 "ddgst": false 00:10:03.283 }, 00:10:03.283 "method": "bdev_nvme_attach_controller" 00:10:03.283 }' 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:03.283 "params": { 00:10:03.283 "name": "Nvme1", 00:10:03.283 "trtype": "rdma", 00:10:03.283 "traddr": "192.168.100.8", 00:10:03.283 "adrfam": "ipv4", 00:10:03.283 "trsvcid": "4420", 00:10:03.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.283 "hdgst": false, 00:10:03.283 "ddgst": false 00:10:03.283 }, 00:10:03.283 "method": "bdev_nvme_attach_controller" 00:10:03.283 }' 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:03.283 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:03.283 "params": { 00:10:03.283 "name": "Nvme1", 00:10:03.284 "trtype": "rdma", 00:10:03.284 "traddr": "192.168.100.8", 00:10:03.284 "adrfam": "ipv4", 00:10:03.284 "trsvcid": "4420", 00:10:03.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.284 "hdgst": false, 00:10:03.284 "ddgst": false 00:10:03.284 }, 00:10:03.284 "method": "bdev_nvme_attach_controller" 00:10:03.284 }' 00:10:03.284 15:46:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:03.284 "params": { 00:10:03.284 "name": "Nvme1", 00:10:03.284 "trtype": "rdma", 00:10:03.284 "traddr": "192.168.100.8", 00:10:03.284 "adrfam": "ipv4", 00:10:03.284 "trsvcid": "4420", 00:10:03.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.284 "hdgst": false, 00:10:03.284 "ddgst": false 00:10:03.284 }, 00:10:03.284 "method": "bdev_nvme_attach_controller" 00:10:03.284 }' 00:10:03.543 [2024-11-17 15:46:48.860100] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:03.543 [2024-11-17 15:46:48.860099] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:03.543 [2024-11-17 15:46:48.860196] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-17 15:46:48.860196] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:03.543 --proc-type=auto ] 00:10:03.543 [2024-11-17 15:46:48.860263] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
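Each bdevperf instance launched above reads its NVMe-oF connection parameters from a generated JSON config passed as --json /dev/fd/63. Written out to a file, the equivalent config and the write-workload invocation would look roughly as follows; the outer "subsystems"/"bdev" wrapper is inferred from the usual bdevperf JSON layout rather than shown verbatim in the trace, and the bdevperf path assumes an SPDK build tree:

cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Flags exactly as the traced write instance: core mask 0x10, queue depth 128, 4 KiB I/O, 1 second run.
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w write -t 1 -s 256

The read, flush and unmap instances launched alongside it differ only in the -m/-i values and the -w workload argument.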
00:10:03.543 [2024-11-17 15:46:48.860337] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:03.543 [2024-11-17 15:46:48.864260] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:03.543 [2024-11-17 15:46:48.864351] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:03.802 [2024-11-17 15:46:49.109888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.803 [2024-11-17 15:46:49.207756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:03.803 [2024-11-17 15:46:49.210935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.803 [2024-11-17 15:46:49.306959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:03.803 [2024-11-17 15:46:49.313961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.063 [2024-11-17 15:46:49.378382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.063 [2024-11-17 15:46:49.413789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:04.063 [2024-11-17 15:46:49.476242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:04.063 Running I/O for 1 seconds... 00:10:04.321 Running I/O for 1 seconds... 00:10:04.321 Running I/O for 1 seconds... 00:10:04.321 Running I/O for 1 seconds... 00:10:05.256 19360.00 IOPS, 75.62 MiB/s 00:10:05.256 Latency(us) 00:10:05.256 [2024-11-17T14:46:50.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.256 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:05.256 Nvme1n1 : 1.01 19398.53 75.78 0.00 0.00 6579.01 4089.45 16672.36 00:10:05.256 [2024-11-17T14:46:50.800Z] =================================================================================================================== 00:10:05.256 [2024-11-17T14:46:50.800Z] Total : 19398.53 75.78 0.00 0.00 6579.01 4089.45 16672.36 00:10:05.256 13837.00 IOPS, 54.05 MiB/s 00:10:05.256 Latency(us) 00:10:05.256 [2024-11-17T14:46:50.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.256 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:05.256 Nvme1n1 : 1.01 13883.49 54.23 0.00 0.00 9186.40 5347.74 24956.11 00:10:05.256 [2024-11-17T14:46:50.800Z] =================================================================================================================== 00:10:05.256 [2024-11-17T14:46:50.800Z] Total : 13883.49 54.23 0.00 0.00 9186.40 5347.74 24956.11 00:10:05.256 14811.00 IOPS, 57.86 MiB/s 00:10:05.256 Latency(us) 00:10:05.256 [2024-11-17T14:46:50.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.256 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:05.256 Nvme1n1 : 1.01 14888.86 58.16 0.00 0.00 8574.10 3722.44 25480.40 00:10:05.256 [2024-11-17T14:46:50.800Z] =================================================================================================================== 00:10:05.256 [2024-11-17T14:46:50.800Z] Total : 14888.86 58.16 0.00 0.00 8574.10 3722.44 25480.40 00:10:05.515 234968.00 IOPS, 917.84 MiB/s 00:10:05.515 Latency(us) 
00:10:05.515 [2024-11-17T14:46:51.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.515 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:05.515 Nvme1n1 : 1.00 234590.57 916.37 0.00 0.00 542.71 249.04 2752.51 00:10:05.515 [2024-11-17T14:46:51.059Z] =================================================================================================================== 00:10:05.515 [2024-11-17T14:46:51.059Z] Total : 234590.57 916.37 0.00 0.00 542.71 249.04 2752.51 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3194646 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3194648 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3194651 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:06.082 rmmod nvme_rdma 00:10:06.082 rmmod nvme_fabrics 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:06.082 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3194359 ']' 00:10:06.083 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3194359 00:10:06.083 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3194359 ']' 00:10:06.083 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3194359 00:10:06.083 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:06.083 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.083 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3194359 00:10:06.342 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.342 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.342 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3194359' 00:10:06.342 killing process with pid 3194359 00:10:06.342 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3194359 00:10:06.342 15:46:51 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3194359 00:10:07.720 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.720 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:07.720 00:10:07.720 real 0m12.995s 00:10:07.720 user 0m31.225s 00:10:07.720 sys 0m7.175s 00:10:07.720 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.720 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.720 ************************************ 00:10:07.720 END TEST nvmf_bdev_io_wait 00:10:07.720 ************************************ 00:10:07.720 15:46:53 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:07.720 15:46:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.720 15:46:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.720 15:46:53 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.980 ************************************ 00:10:07.980 START TEST nvmf_queue_depth 00:10:07.980 ************************************ 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:10:07.980 * Looking for test storage... 
00:10:07.980 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:07.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.980 --rc genhtml_branch_coverage=1 00:10:07.980 --rc genhtml_function_coverage=1 00:10:07.980 --rc genhtml_legend=1 00:10:07.980 --rc geninfo_all_blocks=1 00:10:07.980 --rc geninfo_unexecuted_blocks=1 00:10:07.980 00:10:07.980 ' 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:07.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.980 --rc genhtml_branch_coverage=1 00:10:07.980 --rc genhtml_function_coverage=1 00:10:07.980 --rc genhtml_legend=1 00:10:07.980 --rc geninfo_all_blocks=1 00:10:07.980 --rc geninfo_unexecuted_blocks=1 00:10:07.980 00:10:07.980 ' 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:07.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.980 --rc genhtml_branch_coverage=1 00:10:07.980 --rc genhtml_function_coverage=1 00:10:07.980 --rc genhtml_legend=1 00:10:07.980 --rc geninfo_all_blocks=1 00:10:07.980 --rc geninfo_unexecuted_blocks=1 00:10:07.980 00:10:07.980 ' 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:07.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:07.980 --rc genhtml_branch_coverage=1 00:10:07.980 --rc genhtml_function_coverage=1 00:10:07.980 --rc genhtml_legend=1 00:10:07.980 --rc geninfo_all_blocks=1 00:10:07.980 --rc geninfo_unexecuted_blocks=1 00:10:07.980 00:10:07.980 ' 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.980 15:46:53 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.980 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:07.981 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:07.981 15:46:53 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:14.842 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:14.842 Found 0000:d9:00.1 (0x15b3 - 0x1015) 
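The trace above is gather_supported_nvmf_pci_devs from nvmf/common.sh: it seeds the e810/x722/mlx arrays from the PCI bus cache, keeps only the Mellanox mlx5 entries because SPDK_TEST_NVMF_NICS=mlx5, and echoes each match (vendor 0x15b3, device 0x1015 at 0000:d9:00.0 and 0000:d9:00.1). A minimal standalone sketch of the same lookup, assuming only the two PCI addresses reported in this run:

  # Show the vendor:device pair and the netdev behind each Mellanox function,
  # mirroring the pci_devs matching traced above and the pci_net_devs lookup that follows.
  for pci in 0000:d9:00.0 0000:d9:00.1; do
      lspci -nn -s "$pci"                      # reports the [15b3:1015] ID the script matches on
      ls "/sys/bus/pci/devices/$pci/net/"      # the kernel exposes the interface name here (mlx_0_0, mlx_0_1)
  done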
00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:14.842 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:14.842 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # rdma_device_init 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:14.842 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:14.843 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:14.843 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:14.843 altname enp217s0f0np0 00:10:14.843 altname ens818f0np0 00:10:14.843 inet 192.168.100.8/24 scope global mlx_0_0 00:10:14.843 valid_lft forever preferred_lft forever 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:14.843 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:14.843 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:14.843 altname enp217s0f1np1 00:10:14.843 altname ens818f1np1 00:10:14.843 inet 192.168.100.9/24 scope global mlx_0_1 00:10:14.843 valid_lft forever preferred_lft forever 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:14.843 15:47:00 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:14.843 192.168.100.9' 00:10:14.843 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:14.843 192.168.100.9' 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@485 -- # head -n 1 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:15.102 192.168.100.9' 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # tail -n +2 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # head -n 1 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3198905 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3198905 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3198905 ']' 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.102 15:47:00 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.102 [2024-11-17 15:47:00.529198] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:15.102 [2024-11-17 15:47:00.529296] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.362 [2024-11-17 15:47:00.670080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.362 [2024-11-17 15:47:00.772438] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.362 [2024-11-17 15:47:00.772482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.362 [2024-11-17 15:47:00.772495] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.362 [2024-11-17 15:47:00.772509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.362 [2024-11-17 15:47:00.772519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.362 [2024-11-17 15:47:00.773785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:15.930 [2024-11-17 15:47:01.388683] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f76d05bd940) succeed. 00:10:15.930 [2024-11-17 15:47:01.398008] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f76d0579940) succeed. 
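allocate_nic_ips above walks get_rdma_if_list and reads the first IPv4 address off each RDMA-capable interface; NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9 fall out of that pipeline before nvmf_tgt is launched and the two mlx5 IB devices are created. A minimal sketch of the same extraction, assuming the mlx_0_0/mlx_0_1 interface names from this run:

  # Same ip | awk | cut pipeline used by get_ip_address in nvmf/common.sh.
  for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1    # e.g. 192.168.100.8
  done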
00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.930 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.189 Malloc0 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.190 [2024-11-17 15:47:01.572980] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3199190 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3199190 /var/tmp/bdevperf.sock 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3199190 ']' 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:16.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.190 15:47:01 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.190 [2024-11-17 15:47:01.659484] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:16.190 [2024-11-17 15:47:01.659576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3199190 ] 00:10:16.449 [2024-11-17 15:47:01.791714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.449 [2024-11-17 15:47:01.890888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.017 15:47:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.017 15:47:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:17.017 15:47:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:17.017 15:47:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.017 15:47:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:17.017 NVMe0n1 00:10:17.017 15:47:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.017 15:47:02 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:17.277 Running I/O for 10 seconds... 
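Everything queue_depth.sh configured above went through rpc_cmd against the default /var/tmp/spdk.sock; the same target layout can be reproduced by hand with scripts/rpc.py using the arguments visible in the trace. A sketch, assuming nvmf_tgt is already running:

  # Recreate the queue-depth target configuration shown above.
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB backing bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

bdevperf is then started with -z (wait for RPC) on its own socket, NVMe0 is attached over RDMA with bdev_nvme_attach_controller, and bdevperf.py perform_tests drives the 10-second verify workload at queue depth 1024 whose per-second IOPS samples follow.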
00:10:19.152 15360.00 IOPS, 60.00 MiB/s [2024-11-17T14:47:06.075Z] 15575.00 IOPS, 60.84 MiB/s [2024-11-17T14:47:07.012Z] 15701.33 IOPS, 61.33 MiB/s [2024-11-17T14:47:07.957Z] 15722.00 IOPS, 61.41 MiB/s [2024-11-17T14:47:08.892Z] 15769.60 IOPS, 61.60 MiB/s [2024-11-17T14:47:09.829Z] 15836.83 IOPS, 61.86 MiB/s [2024-11-17T14:47:10.767Z] 15798.86 IOPS, 61.71 MiB/s [2024-11-17T14:47:11.706Z] 15782.62 IOPS, 61.65 MiB/s [2024-11-17T14:47:13.085Z] 15815.11 IOPS, 61.78 MiB/s [2024-11-17T14:47:13.085Z] 15836.60 IOPS, 61.86 MiB/s 00:10:27.541 Latency(us) 00:10:27.541 [2024-11-17T14:47:13.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.541 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:27.541 Verification LBA range: start 0x0 length 0x4000 00:10:27.541 NVMe0n1 : 10.04 15860.97 61.96 0.00 0.00 64344.97 14680.06 40894.46 00:10:27.541 [2024-11-17T14:47:13.085Z] =================================================================================================================== 00:10:27.541 [2024-11-17T14:47:13.085Z] Total : 15860.97 61.96 0.00 0.00 64344.97 14680.06 40894.46 00:10:27.541 { 00:10:27.541 "results": [ 00:10:27.541 { 00:10:27.541 "job": "NVMe0n1", 00:10:27.541 "core_mask": "0x1", 00:10:27.541 "workload": "verify", 00:10:27.541 "status": "finished", 00:10:27.541 "verify_range": { 00:10:27.541 "start": 0, 00:10:27.541 "length": 16384 00:10:27.541 }, 00:10:27.541 "queue_depth": 1024, 00:10:27.541 "io_size": 4096, 00:10:27.541 "runtime": 10.042515, 00:10:27.541 "iops": 15860.967098381232, 00:10:27.541 "mibps": 61.95690272805169, 00:10:27.541 "io_failed": 0, 00:10:27.541 "io_timeout": 0, 00:10:27.541 "avg_latency_us": 64344.969232354786, 00:10:27.541 "min_latency_us": 14680.064, 00:10:27.541 "max_latency_us": 40894.464 00:10:27.541 } 00:10:27.541 ], 00:10:27.541 "core_count": 1 00:10:27.541 } 00:10:27.541 15:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3199190 00:10:27.541 15:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3199190 ']' 00:10:27.541 15:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3199190 00:10:27.541 15:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:27.541 15:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.542 15:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3199190 00:10:27.542 15:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.542 15:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.542 15:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3199190' 00:10:27.542 killing process with pid 3199190 00:10:27.542 15:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3199190 00:10:27.542 Received shutdown signal, test time was about 10.000000 seconds 00:10:27.542 00:10:27.542 Latency(us) 00:10:27.542 [2024-11-17T14:47:13.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.542 [2024-11-17T14:47:13.086Z] 
=================================================================================================================== 00:10:27.542 [2024-11-17T14:47:13.086Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:27.542 15:47:12 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3199190 00:10:28.111 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:28.111 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:28.111 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:28.111 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:28.111 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:28.111 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:28.111 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:28.111 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:28.111 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:28.370 rmmod nvme_rdma 00:10:28.370 rmmod nvme_fabrics 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3198905 ']' 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3198905 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3198905 ']' 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3198905 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3198905 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3198905' 00:10:28.370 killing process with pid 3198905 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3198905 00:10:28.370 15:47:13 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3198905 00:10:29.748 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.748 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:29.748 00:10:29.748 real 0m21.826s 00:10:29.748 user 0m28.686s 00:10:29.748 sys 0m6.297s 00:10:29.748 
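The JSON block printed at the end of the run is bdevperf's machine-readable summary of the same job; the IOPS and latency columns in the table above are rendered from its fields. A sketch of pulling the headline numbers back out with jq (not used by the test itself), assuming the block has been saved to a hypothetical result.json:

  # Print per-job IOPS and average latency from the bdevperf results shown above.
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.avg_latency_us) us avg latency"' result.json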
15:47:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:29.748 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:29.748 ************************************ 00:10:29.748 END TEST nvmf_queue_depth 00:10:29.748 ************************************ 00:10:29.748 15:47:15 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:29.748 15:47:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:29.748 15:47:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:29.748 15:47:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.748 ************************************ 00:10:29.748 START TEST nvmf_target_multipath 00:10:29.748 ************************************ 00:10:29.748 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:29.748 * Looking for test storage... 00:10:29.748 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:29.748 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:29.748 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:29.748 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- 
# (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:30.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.008 --rc genhtml_branch_coverage=1 00:10:30.008 --rc genhtml_function_coverage=1 00:10:30.008 --rc genhtml_legend=1 00:10:30.008 --rc geninfo_all_blocks=1 00:10:30.008 --rc geninfo_unexecuted_blocks=1 00:10:30.008 00:10:30.008 ' 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:30.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.008 --rc genhtml_branch_coverage=1 00:10:30.008 --rc genhtml_function_coverage=1 00:10:30.008 --rc genhtml_legend=1 00:10:30.008 --rc geninfo_all_blocks=1 00:10:30.008 --rc geninfo_unexecuted_blocks=1 00:10:30.008 00:10:30.008 ' 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:30.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.008 --rc genhtml_branch_coverage=1 00:10:30.008 --rc genhtml_function_coverage=1 00:10:30.008 --rc genhtml_legend=1 00:10:30.008 --rc geninfo_all_blocks=1 00:10:30.008 --rc geninfo_unexecuted_blocks=1 00:10:30.008 00:10:30.008 ' 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:30.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:30.008 --rc genhtml_branch_coverage=1 00:10:30.008 --rc genhtml_function_coverage=1 00:10:30.008 --rc genhtml_legend=1 00:10:30.008 --rc geninfo_all_blocks=1 00:10:30.008 --rc geninfo_unexecuted_blocks=1 00:10:30.008 00:10:30.008 ' 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # 
source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:30.008 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:30.009 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:30.009 15:47:15 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:36.582 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:36.582 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:36.582 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:36.582 
15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:36.582 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # rdma_device_init 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
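Note on the "[: : integer expression expected" message printed by nvmf/common.sh line 33 near the top of this test (and again in the zcopy prologue below): it is bash's [ builtin rejecting an empty string in a numeric -eq comparison ('[' '' -eq 1 ']'). A defensive sketch of that check, using a hypothetical variable name since the real one is not visible in the trace:

# Sketch only, not a patch to nvmf/common.sh. SPDK_TEST_EXAMPLE_FLAG is a
# hypothetical placeholder; defaulting it to 0 keeps the -eq test well formed.
: "${SPDK_TEST_EXAMPLE_FLAG:=0}"
if [ "$SPDK_TEST_EXAMPLE_FLAG" -eq 1 ]; then
    echo "flag enabled"
fi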
00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.582 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:36.583 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.583 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:36.583 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:36.583 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:36.583 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:36.583 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:36.583 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:36.583 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:36.583 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:36.583 15:47:21 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:36.583 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:36.583 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:36.583 altname enp217s0f0np0 00:10:36.583 altname ens818f0np0 00:10:36.583 inet 192.168.100.8/24 scope global mlx_0_0 00:10:36.583 valid_lft forever preferred_lft forever 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:36.583 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:36.583 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:36.583 altname enp217s0f1np1 00:10:36.583 altname ens818f1np1 00:10:36.583 inet 192.168.100.9/24 scope global mlx_0_1 00:10:36.583 valid_lft forever preferred_lft forever 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:36.583 192.168.100.9' 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:36.583 192.168.100.9' 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # head -n 1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:36.583 192.168.100.9' 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # head -n 1 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # tail -n +2 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:36.583 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:10:36.843 run this test only with TCP transport for now 00:10:36.843 15:47:22 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:36.843 rmmod nvme_rdma 00:10:36.843 rmmod nvme_fabrics 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:36.843 00:10:36.843 real 0m7.055s 00:10:36.843 user 0m2.020s 00:10:36.843 sys 0m5.240s 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 
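The multipath run above resolved 192.168.100.8 and 192.168.100.9 for mlx_0_0 and mlx_0_1 with an ip/awk/cut pipeline before skipping the test ("run this test only with TCP transport for now"). A condensed, standalone sketch of that address-lookup pattern (function and device names are taken from the trace; this is not the real nvmf/common.sh helper):

# Print the first IPv4 address of each RDMA-backed interface, as the trace does.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
for dev in mlx_0_0 mlx_0_1; do
    echo "$dev -> $(get_ip_address "$dev")"
done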
00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:36.843 ************************************ 00:10:36.843 END TEST nvmf_target_multipath 00:10:36.843 ************************************ 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.843 ************************************ 00:10:36.843 START TEST nvmf_zcopy 00:10:36.843 ************************************ 00:10:36.843 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:37.104 * Looking for test storage... 00:10:37.104 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.104 --rc genhtml_branch_coverage=1 00:10:37.104 --rc genhtml_function_coverage=1 00:10:37.104 --rc genhtml_legend=1 00:10:37.104 --rc geninfo_all_blocks=1 00:10:37.104 --rc geninfo_unexecuted_blocks=1 00:10:37.104 00:10:37.104 ' 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.104 --rc genhtml_branch_coverage=1 00:10:37.104 --rc genhtml_function_coverage=1 00:10:37.104 --rc genhtml_legend=1 00:10:37.104 --rc geninfo_all_blocks=1 00:10:37.104 --rc geninfo_unexecuted_blocks=1 00:10:37.104 00:10:37.104 ' 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.104 --rc genhtml_branch_coverage=1 00:10:37.104 --rc genhtml_function_coverage=1 00:10:37.104 --rc genhtml_legend=1 00:10:37.104 --rc geninfo_all_blocks=1 00:10:37.104 --rc geninfo_unexecuted_blocks=1 00:10:37.104 00:10:37.104 ' 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.104 --rc genhtml_branch_coverage=1 00:10:37.104 --rc genhtml_function_coverage=1 00:10:37.104 --rc genhtml_legend=1 00:10:37.104 --rc geninfo_all_blocks=1 00:10:37.104 --rc geninfo_unexecuted_blocks=1 00:10:37.104 00:10:37.104 ' 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.104 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.105 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.105 15:47:22 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:43.689 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:43.689 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
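The zcopy prologue repeats the same device discovery: both ConnectX functions (0000:d9:00.0 and 0000:d9:00.1, device ID 0x1015) are matched against the mlx5 ID table, and the next entries map each PCI address to its net device through sysfs. A sketch of that lookup, with the PCI address copied from the trace:

# For a given PCI function, list the network interfaces registered under it,
# mirroring the pci_net_devs glob and prefix-stripping seen in the trace.
pci=0000:d9:00.0
pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"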
00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:43.689 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.689 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:43.690 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # rdma_device_init 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:43.690 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:43.690 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:43.690 altname enp217s0f0np0 00:10:43.690 altname ens818f0np0 00:10:43.690 inet 192.168.100.8/24 scope global mlx_0_0 
00:10:43.690 valid_lft forever preferred_lft forever 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:43.690 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:43.690 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:43.690 altname enp217s0f1np1 00:10:43.690 altname ens818f1np1 00:10:43.690 inet 192.168.100.9/24 scope global mlx_0_1 00:10:43.690 valid_lft forever preferred_lft forever 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:43.690 15:47:28 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:43.690 15:47:29 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:43.690 192.168.100.9' 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:43.690 192.168.100.9' 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # head -n 1 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:43.690 192.168.100.9' 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # tail -n +2 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # head -n 1 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:43.690 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3208041 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3208041 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3208041 ']' 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.691 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.691 [2024-11-17 15:47:29.204523] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:43.691 [2024-11-17 15:47:29.204620] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.950 [2024-11-17 15:47:29.339295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.950 [2024-11-17 15:47:29.438201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.950 [2024-11-17 15:47:29.438246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.950 [2024-11-17 15:47:29.438259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.950 [2024-11-17 15:47:29.438272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.950 [2024-11-17 15:47:29.438282] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
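At this point nvmfappstart has launched the target (nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 3208041) and waitforlisten blocks on its RPC socket until the app is up. A hedged sketch of that start-and-wait sequence; the polling loop is an illustration under assumed behavior, not the real waitforlisten implementation:

# Start the SPDK nvmf target with the flags shown in the trace, then poll the
# default RPC socket until it answers (rpc_get_methods is a standard SPDK RPC).
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is ready"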
00:10:43.950 [2024-11-17 15:47:29.439657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.533 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.533 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:44.533 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:44.533 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.533 15:47:29 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:10:44.533 Unsupported transport: rdma 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@812 -- # type=--id 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@813 -- # id=0 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:44.533 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:44.533 nvmf_trace.0 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@827 -- # return 0 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:44.791 rmmod nvme_rdma 00:10:44.791 rmmod nvme_fabrics 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3208041 ']' 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3208041 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3208041 ']' 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3208041 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3208041 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3208041' 00:10:44.791 killing process with pid 3208041 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3208041 00:10:44.791 15:47:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3208041 00:10:45.793 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.793 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:10:45.793 00:10:45.793 real 0m8.941s 00:10:45.793 user 0m4.085s 00:10:45.793 sys 0m5.533s 00:10:45.793 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.793 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:45.793 ************************************ 00:10:45.793 END TEST nvmf_zcopy 00:10:45.793 ************************************ 00:10:45.793 15:47:31 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:45.793 15:47:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:45.793 15:47:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.793 15:47:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:45.793 ************************************ 00:10:45.793 START TEST nvmf_nmic 00:10:45.793 ************************************ 00:10:45.793 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:46.053 * Looking for test storage... 
00:10:46.053 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.053 --rc genhtml_branch_coverage=1 00:10:46.053 --rc genhtml_function_coverage=1 00:10:46.053 --rc genhtml_legend=1 00:10:46.053 --rc geninfo_all_blocks=1 00:10:46.053 --rc geninfo_unexecuted_blocks=1 00:10:46.053 00:10:46.053 ' 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.053 --rc genhtml_branch_coverage=1 00:10:46.053 --rc genhtml_function_coverage=1 00:10:46.053 --rc genhtml_legend=1 00:10:46.053 --rc geninfo_all_blocks=1 00:10:46.053 --rc geninfo_unexecuted_blocks=1 00:10:46.053 00:10:46.053 ' 00:10:46.053 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.053 --rc genhtml_branch_coverage=1 00:10:46.053 --rc genhtml_function_coverage=1 00:10:46.053 --rc genhtml_legend=1 00:10:46.053 --rc geninfo_all_blocks=1 00:10:46.054 --rc geninfo_unexecuted_blocks=1 00:10:46.054 00:10:46.054 ' 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.054 --rc genhtml_branch_coverage=1 00:10:46.054 --rc genhtml_function_coverage=1 00:10:46.054 --rc genhtml_legend=1 00:10:46.054 --rc geninfo_all_blocks=1 00:10:46.054 --rc geninfo_unexecuted_blocks=1 00:10:46.054 00:10:46.054 ' 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.054 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.054 15:47:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.621 15:47:38 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:10:52.621 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:10:52.621 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:10:52.621 Found net devices under 0000:d9:00.0: mlx_0_0 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:10:52.621 Found net devices under 0000:d9:00.1: mlx_0_1 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # rdma_device_init 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@530 -- # allocate_nic_ips 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:52.621 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:52.622 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:52.622 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:10:52.622 altname enp217s0f0np0 00:10:52.622 altname 
ens818f0np0 00:10:52.622 inet 192.168.100.8/24 scope global mlx_0_0 00:10:52.622 valid_lft forever preferred_lft forever 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:52.622 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:52.622 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:10:52.622 altname enp217s0f1np1 00:10:52.622 altname ens818f1np1 00:10:52.622 inet 192.168.100.9/24 scope global mlx_0_1 00:10:52.622 valid_lft forever preferred_lft forever 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:52.622 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:10:52.881 192.168.100.9' 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:10:52.881 192.168.100.9' 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # head -n 1 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:10:52.881 192.168.100.9' 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # tail -n +2 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # head -n 1 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3211680 00:10:52.881 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3211680 00:10:52.882 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:52.882 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3211680 ']' 00:10:52.882 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.882 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:52.882 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.882 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:52.882 15:47:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:52.882 [2024-11-17 15:47:38.364340] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:52.882 [2024-11-17 15:47:38.364432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.140 [2024-11-17 15:47:38.499723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.140 [2024-11-17 15:47:38.603517] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.140 [2024-11-17 15:47:38.603562] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.140 [2024-11-17 15:47:38.603579] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.140 [2024-11-17 15:47:38.603595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.140 [2024-11-17 15:47:38.603604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:53.140 [2024-11-17 15:47:38.606043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.140 [2024-11-17 15:47:38.606061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.140 [2024-11-17 15:47:38.606157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.140 [2024-11-17 15:47:38.606165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.705 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.705 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:53.705 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.705 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.705 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.705 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.705 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:53.705 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.705 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:53.705 [2024-11-17 15:47:39.240578] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f2e25dbd940) succeed. 00:10:53.964 [2024-11-17 15:47:39.250046] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f2e25d79940) succeed. 
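The nmic test that follows configures this freshly started target over JSON-RPC and then connects to it from the host side. Condensed, and assuming rpc_cmd is effectively scripts/rpc.py talking to the default /var/tmp/spdk.sock socket, the sequence traced below amounts to roughly this sketch (hostnqn/hostid are the values printed later in this log; '-i 15' is the retry count the test adds for RDMA):

  # Target side: RDMA transport, a 64 MiB malloc bdev, one subsystem with that namespace and an RDMA listener.
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # Host side: connect over NVMe/RDMA, then disconnect when the test is done.
  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1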
00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.222 Malloc0 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.222 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.223 [2024-11-17 15:47:39.624330] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:54.223 test case1: single bdev can't be used in multiple subsystems 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:54.223 15:47:39 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.223 [2024-11-17 15:47:39.652105] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:54.223 [2024-11-17 15:47:39.652136] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:54.223 [2024-11-17 15:47:39.652150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.223 request: 00:10:54.223 { 00:10:54.223 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:54.223 "namespace": { 00:10:54.223 "bdev_name": "Malloc0", 00:10:54.223 "no_auto_visible": false 00:10:54.223 }, 00:10:54.223 "method": "nvmf_subsystem_add_ns", 00:10:54.223 "req_id": 1 00:10:54.223 } 00:10:54.223 Got JSON-RPC error response 00:10:54.223 response: 00:10:54.223 { 00:10:54.223 "code": -32602, 00:10:54.223 "message": "Invalid parameters" 00:10:54.223 } 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:54.223 Adding namespace failed - expected result. 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:54.223 test case2: host connect to nvmf target in multiple paths 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:54.223 [2024-11-17 15:47:39.668176] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.223 15:47:39 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:55.160 15:47:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:10:56.537 15:47:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.537 15:47:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:56.537 15:47:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:10:56.537 15:47:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:56.537 15:47:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:58.457 15:47:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:58.457 15:47:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:58.457 15:47:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.457 15:47:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:58.457 15:47:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.457 15:47:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:58.457 15:47:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:58.457 [global] 00:10:58.457 thread=1 00:10:58.457 invalidate=1 00:10:58.457 rw=write 00:10:58.457 time_based=1 00:10:58.457 runtime=1 00:10:58.457 ioengine=libaio 00:10:58.457 direct=1 00:10:58.457 bs=4096 00:10:58.457 iodepth=1 00:10:58.457 norandommap=0 00:10:58.457 numjobs=1 00:10:58.457 00:10:58.457 verify_dump=1 00:10:58.457 verify_backlog=512 00:10:58.457 verify_state_save=0 00:10:58.457 do_verify=1 00:10:58.457 verify=crc32c-intel 00:10:58.457 [job0] 00:10:58.457 filename=/dev/nvme0n1 00:10:58.457 Could not set queue depth (nvme0n1) 00:10:58.721 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.721 fio-3.35 00:10:58.721 Starting 1 thread 00:10:59.654 00:10:59.654 job0: (groupid=0, jobs=1): err= 0: pid=3212881: Sun Nov 17 15:47:45 2024 00:10:59.654 read: IOPS=6405, BW=25.0MiB/s (26.2MB/s)(25.0MiB/1001msec) 00:10:59.654 slat (nsec): min=8212, max=28705, avg=8731.61, stdev=672.39 00:10:59.654 clat (usec): min=53, max=110, avg=64.75, stdev= 4.04 00:10:59.654 lat (usec): min=63, max=119, avg=73.48, stdev= 4.09 00:10:59.654 clat percentiles (usec): 00:10:59.654 | 1.00th=[ 58], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 62], 00:10:59.654 | 30.00th=[ 63], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 67], 00:10:59.654 | 70.00th=[ 68], 80.00th=[ 69], 90.00th=[ 71], 95.00th=[ 72], 00:10:59.654 | 99.00th=[ 76], 99.50th=[ 77], 99.90th=[ 84], 99.95th=[ 86], 00:10:59.654 | 99.99th=[ 112] 00:10:59.654 write: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec); 0 zone resets 00:10:59.654 slat (nsec): min=8610, max=36452, avg=11603.09, stdev=1098.21 00:10:59.654 clat (usec): min=48, max=128, avg=62.66, stdev= 4.20 00:10:59.654 lat (usec): min=61, max=162, avg=74.26, stdev= 4.38 00:10:59.654 clat percentiles (usec): 00:10:59.654 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 58], 20.00th=[ 60], 00:10:59.654 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64], 00:10:59.654 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 69], 95.00th=[ 71], 00:10:59.654 | 99.00th=[ 74], 99.50th=[ 75], 99.90th=[ 82], 99.95th=[ 85], 00:10:59.654 | 99.99th=[ 129] 00:10:59.654 bw ( KiB/s): min=27680, max=27680, per=100.00%, avg=27680.00, stdev= 0.00, samples=1 00:10:59.654 iops : min= 6920, max= 6920, avg=6920.00, stdev= 0.00, samples=1 00:10:59.654 lat (usec) : 50=0.01%, 100=99.98%, 250=0.02% 00:10:59.654 cpu : usr=12.20%, sys=15.30%, ctx=13068, 
majf=0, minf=1 00:10:59.654 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.654 issued rwts: total=6412,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.654 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.654 00:10:59.654 Run status group 0 (all jobs): 00:10:59.654 READ: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=25.0MiB (26.3MB), run=1001-1001msec 00:10:59.654 WRITE: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=26.0MiB (27.3MB), run=1001-1001msec 00:10:59.654 00:10:59.654 Disk stats (read/write): 00:10:59.654 nvme0n1: ios=5682/6105, merge=0/0, ticks=339/319, in_queue=658, util=90.58% 00:10:59.912 15:47:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:01.809 rmmod nvme_rdma 00:11:01.809 rmmod nvme_fabrics 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3211680 ']' 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3211680 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3211680 ']' 00:11:01.809 15:47:47 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3211680 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3211680 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3211680' 00:11:01.809 killing process with pid 3211680 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3211680 00:11:01.809 15:47:47 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3211680 00:11:03.709 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.709 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:03.709 00:11:03.709 real 0m17.719s 00:11:03.709 user 0m50.272s 00:11:03.709 sys 0m6.374s 00:11:03.709 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.709 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:03.709 ************************************ 00:11:03.709 END TEST nvmf_nmic 00:11:03.709 ************************************ 00:11:03.709 15:47:49 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:03.709 15:47:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.709 15:47:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.709 15:47:49 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.709 ************************************ 00:11:03.709 START TEST nvmf_fio_target 00:11:03.709 ************************************ 00:11:03.709 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:11:03.709 * Looking for test storage... 
00:11:03.709 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:03.709 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:03.709 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:03.709 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:03.967 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:03.967 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.967 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.967 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.967 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:03.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.968 --rc genhtml_branch_coverage=1 00:11:03.968 --rc genhtml_function_coverage=1 00:11:03.968 --rc genhtml_legend=1 00:11:03.968 --rc geninfo_all_blocks=1 00:11:03.968 --rc geninfo_unexecuted_blocks=1 00:11:03.968 00:11:03.968 ' 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:03.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.968 --rc genhtml_branch_coverage=1 00:11:03.968 --rc genhtml_function_coverage=1 00:11:03.968 --rc genhtml_legend=1 00:11:03.968 --rc geninfo_all_blocks=1 00:11:03.968 --rc geninfo_unexecuted_blocks=1 00:11:03.968 00:11:03.968 ' 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:03.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.968 --rc genhtml_branch_coverage=1 00:11:03.968 --rc genhtml_function_coverage=1 00:11:03.968 --rc genhtml_legend=1 00:11:03.968 --rc geninfo_all_blocks=1 00:11:03.968 --rc geninfo_unexecuted_blocks=1 00:11:03.968 00:11:03.968 ' 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:03.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.968 --rc genhtml_branch_coverage=1 00:11:03.968 --rc genhtml_function_coverage=1 00:11:03.968 --rc genhtml_legend=1 00:11:03.968 --rc geninfo_all_blocks=1 00:11:03.968 --rc geninfo_unexecuted_blocks=1 00:11:03.968 00:11:03.968 ' 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.968 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.968 
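A minimal side note on the two sizes just defined: MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 feed directly into the bdev_malloc_create RPCs issued further down in this trace. A rough, hand-run equivalent of one such call against an already-running nvmf_tgt (assuming the default /var/tmp/spdk.sock RPC socket and a shortened scripts/rpc.py path) would be:

    # sketch only: create a 64 MiB malloc bdev with 512-byte blocks; the target prints the new bdev name (e.g. Malloc0)
    scripts/rpc.py bdev_malloc_create 64 512
    # optional sanity check that the bdev now exists
    scripts/rpc.py bdev_get_bdevs -b Malloc0

The bdev names are assigned by the target in creation order, which is why the script below captures each command's output into malloc_bdevs / raid_malloc_bdevs / concat_malloc_bdevs before wiring them into the subsystem.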
15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:03.968 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:03.969 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.969 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.969 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.969 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.969 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.969 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.969 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.969 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:03.969 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:03.969 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:03.969 15:47:49 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.528 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.528 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:10.529 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:10.529 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:10.529 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:10.529 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # rdma_device_init 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:10.529 15:47:55 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.529 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:10.530 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:10.530 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:10.530 altname enp217s0f0np0 00:11:10.530 altname ens818f0np0 00:11:10.530 inet 192.168.100.8/24 scope global mlx_0_0 00:11:10.530 valid_lft forever preferred_lft forever 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:10.530 15:47:55 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:10.530 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:10.530 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:10.530 altname enp217s0f1np1 00:11:10.530 altname ens818f1np1 00:11:10.530 inet 192.168.100.9/24 scope global mlx_0_1 00:11:10.530 valid_lft forever preferred_lft forever 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:10.530 15:47:56 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:10.530 192.168.100.9' 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # head -n 1 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:10.530 192.168.100.9' 00:11:10.530 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # head -n 1 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:10.789 192.168.100.9' 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # tail -n +2 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3217115 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3217115 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3217115 ']' 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.789 15:47:56 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.789 [2024-11-17 15:47:56.208238] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:10.789 [2024-11-17 15:47:56.208368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.047 [2024-11-17 15:47:56.343236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.047 [2024-11-17 15:47:56.440473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:11.047 [2024-11-17 15:47:56.440524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.048 [2024-11-17 15:47:56.440537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.048 [2024-11-17 15:47:56.440550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.048 [2024-11-17 15:47:56.440560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.048 [2024-11-17 15:47:56.442986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.048 [2024-11-17 15:47:56.443057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.048 [2024-11-17 15:47:56.443116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.048 [2024-11-17 15:47:56.443124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.612 15:47:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.612 15:47:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:11.612 15:47:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:11.612 15:47:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:11.612 15:47:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.612 15:47:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.612 15:47:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:11.869 [2024-11-17 15:47:57.260194] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f77c8fbd940) succeed. 00:11:11.869 [2024-11-17 15:47:57.269498] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f77c8f79940) succeed. 
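For orientation, the target-side bring-up that this stretch of the trace performs can be condensed into the following sketch; every value is taken from commands recorded in this trace, with paths shortened to scripts/rpc.py:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512                                   # repeated to obtain Malloc0..Malloc6
    scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # likewise Malloc1, raid0 and concat0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 "${NVME_HOST[@]}"   # host NQN/ID as exported by nvmf/common.sh above

With four namespaces attached (Malloc0, Malloc1, raid0, concat0) the initiator exposes /dev/nvme0n1 through /dev/nvme0n4, which is why waitforserial expects nvme_devices=4 and the fio job file further down targets four devices.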
00:11:12.126 15:47:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.384 15:47:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:12.384 15:47:57 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.642 15:47:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:12.642 15:47:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:12.900 15:47:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:12.900 15:47:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.158 15:47:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:13.158 15:47:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:13.415 15:47:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.673 15:47:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:13.673 15:47:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:13.929 15:47:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:13.929 15:47:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:14.186 15:47:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:14.186 15:47:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:14.444 15:47:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.700 15:47:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:14.700 15:47:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.700 15:48:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:14.700 15:48:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.957 15:48:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:15.214 [2024-11-17 15:48:00.585201] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:15.214 15:48:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:15.471 15:48:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:15.728 15:48:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:16.661 15:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:16.661 15:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:16.661 15:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.661 15:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:16.661 15:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:16.661 15:48:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:18.558 15:48:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:18.558 15:48:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:18.558 15:48:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.558 15:48:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:18.558 15:48:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.558 15:48:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:18.558 15:48:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:18.558 [global] 00:11:18.558 thread=1 00:11:18.558 invalidate=1 00:11:18.558 rw=write 00:11:18.558 time_based=1 00:11:18.558 runtime=1 00:11:18.558 ioengine=libaio 00:11:18.558 direct=1 00:11:18.558 bs=4096 00:11:18.558 iodepth=1 00:11:18.558 norandommap=0 00:11:18.558 numjobs=1 00:11:18.558 00:11:18.558 verify_dump=1 00:11:18.558 verify_backlog=512 00:11:18.558 verify_state_save=0 00:11:18.558 do_verify=1 00:11:18.558 verify=crc32c-intel 00:11:18.558 [job0] 00:11:18.558 filename=/dev/nvme0n1 00:11:18.558 [job1] 00:11:18.558 filename=/dev/nvme0n2 00:11:18.558 [job2] 00:11:18.558 filename=/dev/nvme0n3 00:11:18.558 [job3] 00:11:18.558 filename=/dev/nvme0n4 00:11:18.841 Could not set queue depth (nvme0n1) 00:11:18.841 Could not set queue depth (nvme0n2) 00:11:18.841 Could not set queue depth (nvme0n3) 00:11:18.841 Could not set queue depth (nvme0n4) 00:11:19.107 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.107 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.107 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.107 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:19.107 fio-3.35 00:11:19.107 Starting 4 threads 00:11:20.491 00:11:20.491 job0: (groupid=0, jobs=1): err= 0: pid=3218942: Sun Nov 17 15:48:05 2024 00:11:20.491 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:11:20.491 slat (nsec): min=8372, max=23199, avg=8962.31, stdev=704.54 00:11:20.491 clat (usec): min=72, max=224, avg=117.00, stdev=38.92 00:11:20.491 lat (usec): min=81, max=233, avg=125.97, stdev=38.99 00:11:20.491 clat percentiles (usec): 00:11:20.491 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 83], 00:11:20.491 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 91], 60.00th=[ 141], 00:11:20.491 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 174], 00:11:20.491 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 217], 99.95th=[ 221], 00:11:20.491 | 99.99th=[ 225] 00:11:20.491 write: IOPS=4023, BW=15.7MiB/s (16.5MB/s)(15.7MiB/1001msec); 0 zone resets 00:11:20.491 slat (nsec): min=10614, max=47312, avg=11812.18, stdev=1174.22 00:11:20.491 clat (usec): min=70, max=255, avg=119.38, stdev=41.39 00:11:20.491 lat (usec): min=81, max=268, avg=131.19, stdev=41.49 00:11:20.491 clat percentiles (usec): 00:11:20.491 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 80], 00:11:20.491 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 92], 60.00th=[ 153], 00:11:20.491 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 174], 00:11:20.491 | 99.00th=[ 198], 99.50th=[ 210], 99.90th=[ 221], 99.95th=[ 227], 00:11:20.491 | 99.99th=[ 255] 00:11:20.491 bw ( KiB/s): min=20480, max=20480, per=33.90%, avg=20480.00, stdev= 0.00, samples=1 00:11:20.491 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:20.491 lat (usec) : 100=55.16%, 250=44.82%, 500=0.01% 00:11:20.491 cpu : usr=7.30%, sys=9.20%, ctx=7612, majf=0, minf=1 00:11:20.491 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.491 issued rwts: total=3584,4028,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.491 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.491 job1: (groupid=0, jobs=1): err= 0: pid=3218944: Sun Nov 17 15:48:05 2024 00:11:20.491 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec) 00:11:20.491 slat (nsec): min=8365, max=59248, avg=10408.41, stdev=3101.44 00:11:20.491 clat (usec): min=79, max=250, avg=152.65, stdev=20.48 00:11:20.491 lat (usec): min=88, max=260, avg=163.06, stdev=21.04 00:11:20.491 clat percentiles (usec): 00:11:20.491 | 1.00th=[ 96], 5.00th=[ 124], 10.00th=[ 130], 20.00th=[ 137], 00:11:20.491 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 159], 00:11:20.491 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 184], 00:11:20.491 | 99.00th=[ 215], 99.50th=[ 219], 99.90th=[ 229], 99.95th=[ 249], 00:11:20.491 | 99.99th=[ 251] 00:11:20.491 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:20.491 slat (nsec): min=10045, max=48362, avg=12674.59, stdev=2918.80 00:11:20.491 clat (usec): min=76, 
max=462, avg=145.78, stdev=24.03 00:11:20.491 lat (usec): min=87, max=474, avg=158.45, stdev=24.72 00:11:20.491 clat percentiles (usec): 00:11:20.491 | 1.00th=[ 92], 5.00th=[ 114], 10.00th=[ 121], 20.00th=[ 127], 00:11:20.491 | 30.00th=[ 131], 40.00th=[ 137], 50.00th=[ 145], 60.00th=[ 153], 00:11:20.491 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 186], 00:11:20.491 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 231], 99.95th=[ 251], 00:11:20.491 | 99.99th=[ 461] 00:11:20.491 bw ( KiB/s): min=14264, max=14264, per=23.61%, avg=14264.00, stdev= 0.00, samples=1 00:11:20.491 iops : min= 3566, max= 3566, avg=3566.00, stdev= 0.00, samples=1 00:11:20.491 lat (usec) : 100=1.64%, 250=98.31%, 500=0.05% 00:11:20.491 cpu : usr=5.10%, sys=8.50%, ctx=6113, majf=0, minf=1 00:11:20.491 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.491 issued rwts: total=3039,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.491 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.491 job2: (groupid=0, jobs=1): err= 0: pid=3218957: Sun Nov 17 15:48:05 2024 00:11:20.491 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:20.491 slat (nsec): min=8436, max=32147, avg=9220.72, stdev=1050.91 00:11:20.491 clat (usec): min=87, max=234, avg=152.33, stdev=18.62 00:11:20.491 lat (usec): min=96, max=243, avg=161.55, stdev=18.65 00:11:20.491 clat percentiles (usec): 00:11:20.491 | 1.00th=[ 101], 5.00th=[ 123], 10.00th=[ 130], 20.00th=[ 137], 00:11:20.491 | 30.00th=[ 141], 40.00th=[ 149], 50.00th=[ 157], 60.00th=[ 161], 00:11:20.491 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 174], 95.00th=[ 178], 00:11:20.491 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 215], 99.95th=[ 221], 00:11:20.491 | 99.99th=[ 235] 00:11:20.491 write: IOPS=3146, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1001msec); 0 zone resets 00:11:20.491 slat (nsec): min=10478, max=39521, avg=11596.11, stdev=1193.78 00:11:20.491 clat (usec): min=77, max=221, avg=143.69, stdev=20.35 00:11:20.491 lat (usec): min=89, max=232, avg=155.29, stdev=20.30 00:11:20.491 clat percentiles (usec): 00:11:20.491 | 1.00th=[ 95], 5.00th=[ 113], 10.00th=[ 120], 20.00th=[ 126], 00:11:20.491 | 30.00th=[ 131], 40.00th=[ 137], 50.00th=[ 145], 60.00th=[ 155], 00:11:20.491 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 167], 95.00th=[ 172], 00:11:20.491 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 200], 99.95th=[ 210], 00:11:20.491 | 99.99th=[ 223] 00:11:20.491 bw ( KiB/s): min=14384, max=14384, per=23.81%, avg=14384.00, stdev= 0.00, samples=1 00:11:20.491 iops : min= 3596, max= 3596, avg=3596.00, stdev= 0.00, samples=1 00:11:20.491 lat (usec) : 100=1.56%, 250=98.44% 00:11:20.491 cpu : usr=5.80%, sys=7.60%, ctx=6222, majf=0, minf=1 00:11:20.491 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.491 issued rwts: total=3072,3150,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.491 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.491 job3: (groupid=0, jobs=1): err= 0: pid=3218963: Sun Nov 17 15:48:05 2024 00:11:20.491 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:11:20.491 slat (nsec): min=8545, max=29827, avg=9156.16, stdev=808.73 00:11:20.491 clat 
(usec): min=79, max=130, avg=94.32, stdev= 5.81 00:11:20.491 lat (usec): min=88, max=139, avg=103.48, stdev= 5.84 00:11:20.491 clat percentiles (usec): 00:11:20.491 | 1.00th=[ 84], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 90], 00:11:20.491 | 30.00th=[ 92], 40.00th=[ 93], 50.00th=[ 94], 60.00th=[ 95], 00:11:20.491 | 70.00th=[ 97], 80.00th=[ 99], 90.00th=[ 102], 95.00th=[ 105], 00:11:20.491 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 122], 99.95th=[ 124], 00:11:20.491 | 99.99th=[ 131] 00:11:20.491 write: IOPS=4862, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1001msec); 0 zone resets 00:11:20.491 slat (nsec): min=10686, max=43256, avg=11881.74, stdev=1129.17 00:11:20.491 clat (usec): min=65, max=134, avg=90.45, stdev= 6.31 00:11:20.492 lat (usec): min=86, max=157, avg=102.33, stdev= 6.41 00:11:20.492 clat percentiles (usec): 00:11:20.492 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 84], 20.00th=[ 86], 00:11:20.492 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 91], 00:11:20.492 | 70.00th=[ 93], 80.00th=[ 95], 90.00th=[ 98], 95.00th=[ 102], 00:11:20.492 | 99.00th=[ 111], 99.50th=[ 118], 99.90th=[ 127], 99.95th=[ 131], 00:11:20.492 | 99.99th=[ 135] 00:11:20.492 bw ( KiB/s): min=20480, max=20480, per=33.90%, avg=20480.00, stdev= 0.00, samples=1 00:11:20.492 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:20.492 lat (usec) : 100=89.49%, 250=10.51% 00:11:20.492 cpu : usr=7.80%, sys=12.70%, ctx=9475, majf=0, minf=1 00:11:20.492 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:20.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:20.492 issued rwts: total=4608,4867,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:20.492 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:20.492 00:11:20.492 Run status group 0 (all jobs): 00:11:20.492 READ: bw=55.8MiB/s (58.5MB/s), 11.9MiB/s-18.0MiB/s (12.4MB/s-18.9MB/s), io=55.9MiB (58.6MB), run=1001-1001msec 00:11:20.492 WRITE: bw=59.0MiB/s (61.9MB/s), 12.0MiB/s-19.0MiB/s (12.6MB/s-19.9MB/s), io=59.1MiB (61.9MB), run=1001-1001msec 00:11:20.492 00:11:20.492 Disk stats (read/write): 00:11:20.492 nvme0n1: ios=3122/3584, merge=0/0, ticks=329/382, in_queue=711, util=84.47% 00:11:20.492 nvme0n2: ios=2560/2628, merge=0/0, ticks=351/346, in_queue=697, util=85.19% 00:11:20.492 nvme0n3: ios=2560/2703, merge=0/0, ticks=350/337, in_queue=687, util=88.45% 00:11:20.492 nvme0n4: ios=3792/4096, merge=0/0, ticks=321/336, in_queue=657, util=89.50% 00:11:20.492 15:48:05 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:20.492 [global] 00:11:20.492 thread=1 00:11:20.492 invalidate=1 00:11:20.492 rw=randwrite 00:11:20.492 time_based=1 00:11:20.492 runtime=1 00:11:20.492 ioengine=libaio 00:11:20.492 direct=1 00:11:20.492 bs=4096 00:11:20.492 iodepth=1 00:11:20.492 norandommap=0 00:11:20.492 numjobs=1 00:11:20.492 00:11:20.492 verify_dump=1 00:11:20.492 verify_backlog=512 00:11:20.492 verify_state_save=0 00:11:20.492 do_verify=1 00:11:20.492 verify=crc32c-intel 00:11:20.492 [job0] 00:11:20.492 filename=/dev/nvme0n1 00:11:20.492 [job1] 00:11:20.492 filename=/dev/nvme0n2 00:11:20.492 [job2] 00:11:20.492 filename=/dev/nvme0n3 00:11:20.492 [job3] 00:11:20.492 filename=/dev/nvme0n4 00:11:20.492 Could not set queue depth (nvme0n1) 00:11:20.492 Could not set queue depth (nvme0n2) 00:11:20.492 Could not 
set queue depth (nvme0n3) 00:11:20.492 Could not set queue depth (nvme0n4) 00:11:20.749 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.749 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.749 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.749 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:20.749 fio-3.35 00:11:20.749 Starting 4 threads 00:11:22.121 00:11:22.121 job0: (groupid=0, jobs=1): err= 0: pid=3219635: Sun Nov 17 15:48:07 2024 00:11:22.121 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:11:22.121 slat (nsec): min=8327, max=27370, avg=8984.72, stdev=769.48 00:11:22.121 clat (usec): min=74, max=206, avg=93.84, stdev=16.85 00:11:22.121 lat (usec): min=83, max=215, avg=102.83, stdev=16.96 00:11:22.121 clat percentiles (usec): 00:11:22.121 | 1.00th=[ 79], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 84], 00:11:22.121 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 91], 00:11:22.121 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 126], 95.00th=[ 137], 00:11:22.121 | 99.00th=[ 149], 99.50th=[ 159], 99.90th=[ 194], 99.95th=[ 202], 00:11:22.121 | 99.99th=[ 206] 00:11:22.121 write: IOPS=4878, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1001msec); 0 zone resets 00:11:22.121 slat (nsec): min=7563, max=42803, avg=11723.69, stdev=1202.20 00:11:22.121 clat (usec): min=70, max=194, avg=90.72, stdev=18.18 00:11:22.121 lat (usec): min=78, max=206, avg=102.44, stdev=18.25 00:11:22.121 clat percentiles (usec): 00:11:22.121 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 78], 20.00th=[ 80], 00:11:22.121 | 30.00th=[ 82], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 86], 00:11:22.121 | 70.00th=[ 89], 80.00th=[ 94], 90.00th=[ 126], 95.00th=[ 133], 00:11:22.121 | 99.00th=[ 147], 99.50th=[ 161], 99.90th=[ 186], 99.95th=[ 188], 00:11:22.121 | 99.99th=[ 196] 00:11:22.121 bw ( KiB/s): min=20480, max=20480, per=30.71%, avg=20480.00, stdev= 0.00, samples=1 00:11:22.121 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:22.121 lat (usec) : 100=84.30%, 250=15.70% 00:11:22.121 cpu : usr=7.20%, sys=9.40%, ctx=9491, majf=0, minf=1 00:11:22.121 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.121 issued rwts: total=4608,4883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.121 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.121 job1: (groupid=0, jobs=1): err= 0: pid=3219639: Sun Nov 17 15:48:07 2024 00:11:22.121 read: IOPS=3494, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1001msec) 00:11:22.121 slat (nsec): min=8293, max=26036, avg=9057.26, stdev=912.65 00:11:22.121 clat (usec): min=76, max=230, avg=132.68, stdev=33.74 00:11:22.121 lat (usec): min=85, max=239, avg=141.74, stdev=33.78 00:11:22.121 clat percentiles (usec): 00:11:22.121 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 93], 00:11:22.121 | 30.00th=[ 100], 40.00th=[ 135], 50.00th=[ 143], 60.00th=[ 147], 00:11:22.121 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 167], 95.00th=[ 194], 00:11:22.121 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 225], 99.95th=[ 227], 00:11:22.121 | 99.99th=[ 231] 00:11:22.121 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 
00:11:22.121 slat (nsec): min=10060, max=40194, avg=11174.51, stdev=1138.33 00:11:22.121 clat (usec): min=70, max=221, avg=124.70, stdev=32.37 00:11:22.121 lat (usec): min=82, max=241, avg=135.87, stdev=32.34 00:11:22.121 clat percentiles (usec): 00:11:22.121 | 1.00th=[ 77], 5.00th=[ 81], 10.00th=[ 84], 20.00th=[ 89], 00:11:22.121 | 30.00th=[ 94], 40.00th=[ 124], 50.00th=[ 135], 60.00th=[ 139], 00:11:22.121 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 165], 95.00th=[ 184], 00:11:22.121 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 215], 99.95th=[ 215], 00:11:22.121 | 99.99th=[ 223] 00:11:22.121 bw ( KiB/s): min=13264, max=13264, per=19.89%, avg=13264.00, stdev= 0.00, samples=1 00:11:22.121 iops : min= 3316, max= 3316, avg=3316.00, stdev= 0.00, samples=1 00:11:22.121 lat (usec) : 100=32.74%, 250=67.26% 00:11:22.121 cpu : usr=5.80%, sys=9.20%, ctx=7082, majf=0, minf=1 00:11:22.121 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.121 issued rwts: total=3498,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.121 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.121 job2: (groupid=0, jobs=1): err= 0: pid=3219641: Sun Nov 17 15:48:07 2024 00:11:22.121 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:22.121 slat (nsec): min=8441, max=42479, avg=9227.29, stdev=1137.38 00:11:22.121 clat (usec): min=88, max=233, avg=146.47, stdev=16.96 00:11:22.121 lat (usec): min=97, max=241, avg=155.70, stdev=16.99 00:11:22.121 clat percentiles (usec): 00:11:22.121 | 1.00th=[ 100], 5.00th=[ 115], 10.00th=[ 130], 20.00th=[ 137], 00:11:22.121 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 149], 00:11:22.121 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 178], 00:11:22.121 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 206], 99.95th=[ 210], 00:11:22.121 | 99.99th=[ 233] 00:11:22.121 write: IOPS=3411, BW=13.3MiB/s (14.0MB/s)(13.3MiB/1001msec); 0 zone resets 00:11:22.121 slat (nsec): min=10276, max=39548, avg=11253.80, stdev=1415.62 00:11:22.121 clat (usec): min=77, max=214, avg=137.14, stdev=16.95 00:11:22.121 lat (usec): min=89, max=226, avg=148.40, stdev=16.94 00:11:22.121 clat percentiles (usec): 00:11:22.121 | 1.00th=[ 93], 5.00th=[ 105], 10.00th=[ 120], 20.00th=[ 127], 00:11:22.121 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:11:22.121 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 155], 95.00th=[ 169], 00:11:22.121 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 198], 99.95th=[ 204], 00:11:22.121 | 99.99th=[ 215] 00:11:22.121 bw ( KiB/s): min=13256, max=13256, per=19.88%, avg=13256.00, stdev= 0.00, samples=1 00:11:22.121 iops : min= 3314, max= 3314, avg=3314.00, stdev= 0.00, samples=1 00:11:22.121 lat (usec) : 100=2.28%, 250=97.72% 00:11:22.121 cpu : usr=5.50%, sys=8.30%, ctx=6487, majf=0, minf=1 00:11:22.121 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.121 issued rwts: total=3072,3415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.121 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.121 job3: (groupid=0, jobs=1): err= 0: pid=3219642: Sun Nov 17 15:48:07 2024 00:11:22.121 read: IOPS=4603, BW=18.0MiB/s 
(18.9MB/s)(18.0MiB/1001msec) 00:11:22.121 slat (nsec): min=8337, max=31441, avg=9110.25, stdev=967.02 00:11:22.121 clat (usec): min=70, max=159, avg=95.22, stdev= 6.48 00:11:22.121 lat (usec): min=88, max=168, avg=104.33, stdev= 6.51 00:11:22.121 clat percentiles (usec): 00:11:22.121 | 1.00th=[ 84], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 90], 00:11:22.121 | 30.00th=[ 92], 40.00th=[ 93], 50.00th=[ 95], 60.00th=[ 96], 00:11:22.121 | 70.00th=[ 98], 80.00th=[ 100], 90.00th=[ 104], 95.00th=[ 108], 00:11:22.121 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 127], 99.95th=[ 131], 00:11:22.121 | 99.99th=[ 161] 00:11:22.121 write: IOPS=4803, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1001msec); 0 zone resets 00:11:22.121 slat (nsec): min=7984, max=37596, avg=11521.92, stdev=1256.81 00:11:22.121 clat (usec): min=75, max=377, avg=91.34, stdev= 9.57 00:11:22.121 lat (usec): min=87, max=388, avg=102.86, stdev= 9.71 00:11:22.121 clat percentiles (usec): 00:11:22.121 | 1.00th=[ 80], 5.00th=[ 83], 10.00th=[ 84], 20.00th=[ 86], 00:11:22.121 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 92], 00:11:22.121 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 100], 95.00th=[ 104], 00:11:22.121 | 99.00th=[ 135], 99.50th=[ 143], 99.90th=[ 157], 99.95th=[ 178], 00:11:22.121 | 99.99th=[ 379] 00:11:22.121 bw ( KiB/s): min=20480, max=20480, per=30.71%, avg=20480.00, stdev= 0.00, samples=1 00:11:22.121 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:22.121 lat (usec) : 100=85.71%, 250=14.28%, 500=0.01% 00:11:22.121 cpu : usr=7.50%, sys=12.60%, ctx=9416, majf=0, minf=1 00:11:22.121 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:22.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.121 issued rwts: total=4608,4808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.121 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:22.121 00:11:22.121 Run status group 0 (all jobs): 00:11:22.121 READ: bw=61.6MiB/s (64.6MB/s), 12.0MiB/s-18.0MiB/s (12.6MB/s-18.9MB/s), io=61.7MiB (64.7MB), run=1001-1001msec 00:11:22.121 WRITE: bw=65.1MiB/s (68.3MB/s), 13.3MiB/s-19.1MiB/s (14.0MB/s-20.0MB/s), io=65.2MiB (68.4MB), run=1001-1001msec 00:11:22.121 00:11:22.121 Disk stats (read/write): 00:11:22.121 nvme0n1: ios=4145/4259, merge=0/0, ticks=360/340, in_queue=700, util=84.67% 00:11:22.121 nvme0n2: ios=2560/2912, merge=0/0, ticks=357/353, in_queue=710, util=85.32% 00:11:22.121 nvme0n3: ios=2560/2791, merge=0/0, ticks=362/356, in_queue=718, util=88.48% 00:11:22.121 nvme0n4: ios=3791/4096, merge=0/0, ticks=327/332, in_queue=659, util=89.52% 00:11:22.121 15:48:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:22.121 [global] 00:11:22.121 thread=1 00:11:22.121 invalidate=1 00:11:22.121 rw=write 00:11:22.121 time_based=1 00:11:22.121 runtime=1 00:11:22.121 ioengine=libaio 00:11:22.121 direct=1 00:11:22.121 bs=4096 00:11:22.121 iodepth=128 00:11:22.121 norandommap=0 00:11:22.121 numjobs=1 00:11:22.121 00:11:22.121 verify_dump=1 00:11:22.121 verify_backlog=512 00:11:22.121 verify_state_save=0 00:11:22.121 do_verify=1 00:11:22.121 verify=crc32c-intel 00:11:22.121 [job0] 00:11:22.121 filename=/dev/nvme0n1 00:11:22.121 [job1] 00:11:22.121 filename=/dev/nvme0n2 00:11:22.121 [job2] 00:11:22.121 filename=/dev/nvme0n3 00:11:22.121 [job3] 00:11:22.121 
filename=/dev/nvme0n4 00:11:22.121 Could not set queue depth (nvme0n1) 00:11:22.121 Could not set queue depth (nvme0n2) 00:11:22.121 Could not set queue depth (nvme0n3) 00:11:22.121 Could not set queue depth (nvme0n4) 00:11:22.379 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.379 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.379 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.379 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:22.379 fio-3.35 00:11:22.379 Starting 4 threads 00:11:23.750 00:11:23.750 job0: (groupid=0, jobs=1): err= 0: pid=3220075: Sun Nov 17 15:48:08 2024 00:11:23.750 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:11:23.750 slat (usec): min=2, max=1399, avg=138.83, stdev=298.37 00:11:23.750 clat (usec): min=11916, max=21011, avg=17882.56, stdev=2663.40 00:11:23.750 lat (usec): min=11920, max=21658, avg=18021.39, stdev=2668.17 00:11:23.750 clat percentiles (usec): 00:11:23.750 | 1.00th=[12649], 5.00th=[13173], 10.00th=[13566], 20.00th=[13698], 00:11:23.750 | 30.00th=[18220], 40.00th=[19006], 50.00th=[19268], 60.00th=[19530], 00:11:23.750 | 70.00th=[19792], 80.00th=[19792], 90.00th=[20055], 95.00th=[20055], 00:11:23.750 | 99.00th=[20841], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:11:23.750 | 99.99th=[21103] 00:11:23.750 write: IOPS=3663, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1003msec); 0 zone resets 00:11:23.750 slat (usec): min=2, max=1539, avg=132.37, stdev=284.62 00:11:23.750 clat (usec): min=2195, max=20272, avg=16983.71, stdev=2641.22 00:11:23.750 lat (usec): min=2978, max=20281, avg=17116.08, stdev=2641.94 00:11:23.750 clat percentiles (usec): 00:11:23.750 | 1.00th=[ 6128], 5.00th=[12387], 10.00th=[12780], 20.00th=[15008], 00:11:23.750 | 30.00th=[17171], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:11:23.750 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19006], 95.00th=[19006], 00:11:23.750 | 99.00th=[19530], 99.50th=[19530], 99.90th=[20055], 99.95th=[20317], 00:11:23.750 | 99.99th=[20317] 00:11:23.750 bw ( KiB/s): min=13680, max=14992, per=17.98%, avg=14336.00, stdev=927.72, samples=2 00:11:23.750 iops : min= 3420, max= 3748, avg=3584.00, stdev=231.93, samples=2 00:11:23.750 lat (msec) : 4=0.22%, 10=0.74%, 20=94.60%, 50=4.44% 00:11:23.750 cpu : usr=1.70%, sys=4.29%, ctx=1505, majf=0, minf=1 00:11:23.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:23.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.751 issued rwts: total=3584,3674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.751 job1: (groupid=0, jobs=1): err= 0: pid=3220076: Sun Nov 17 15:48:08 2024 00:11:23.751 read: IOPS=3884, BW=15.2MiB/s (15.9MB/s)(15.2MiB/1003msec) 00:11:23.751 slat (usec): min=2, max=2900, avg=127.38, stdev=306.70 00:11:23.751 clat (usec): min=2360, max=21030, avg=16354.69, stdev=4739.93 00:11:23.751 lat (usec): min=2741, max=21101, avg=16482.07, stdev=4767.41 00:11:23.751 clat percentiles (usec): 00:11:23.751 | 1.00th=[ 5997], 5.00th=[ 6783], 10.00th=[ 7177], 20.00th=[12649], 00:11:23.751 | 30.00th=[13173], 40.00th=[18744], 50.00th=[19006], 60.00th=[19530], 00:11:23.751 | 
70.00th=[19792], 80.00th=[19792], 90.00th=[20055], 95.00th=[20055], 00:11:23.751 | 99.00th=[20841], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:11:23.751 | 99.99th=[21103] 00:11:23.751 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:11:23.751 slat (usec): min=2, max=2747, avg=118.30, stdev=278.08 00:11:23.751 clat (usec): min=5895, max=20373, avg=15333.62, stdev=4416.26 00:11:23.751 lat (usec): min=5905, max=20384, avg=15451.92, stdev=4443.34 00:11:23.751 clat percentiles (usec): 00:11:23.751 | 1.00th=[ 6194], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[11731], 00:11:23.751 | 30.00th=[12911], 40.00th=[17695], 50.00th=[18220], 60.00th=[18220], 00:11:23.751 | 70.00th=[18482], 80.00th=[18744], 90.00th=[18744], 95.00th=[19006], 00:11:23.751 | 99.00th=[19530], 99.50th=[19530], 99.90th=[20055], 99.95th=[20317], 00:11:23.751 | 99.99th=[20317] 00:11:23.751 bw ( KiB/s): min=13664, max=19104, per=20.55%, avg=16384.00, stdev=3846.66, samples=2 00:11:23.751 iops : min= 3416, max= 4776, avg=4096.00, stdev=961.67, samples=2 00:11:23.751 lat (msec) : 4=0.16%, 10=15.89%, 20=79.95%, 50=3.99% 00:11:23.751 cpu : usr=2.30%, sys=4.19%, ctx=1470, majf=0, minf=1 00:11:23.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:23.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.751 issued rwts: total=3896,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.751 job2: (groupid=0, jobs=1): err= 0: pid=3220077: Sun Nov 17 15:48:08 2024 00:11:23.751 read: IOPS=8167, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1003msec) 00:11:23.751 slat (usec): min=2, max=3182, avg=57.95, stdev=217.85 00:11:23.751 clat (usec): min=5673, max=13727, avg=7742.67, stdev=1670.33 00:11:23.751 lat (usec): min=5919, max=13741, avg=7800.62, stdev=1682.70 00:11:23.751 clat percentiles (usec): 00:11:23.751 | 1.00th=[ 6259], 5.00th=[ 6390], 10.00th=[ 6521], 20.00th=[ 6652], 00:11:23.751 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7504], 00:11:23.751 | 70.00th=[ 7832], 80.00th=[ 8160], 90.00th=[ 9241], 95.00th=[12780], 00:11:23.751 | 99.00th=[13435], 99.50th=[13435], 99.90th=[13698], 99.95th=[13698], 00:11:23.751 | 99.99th=[13698] 00:11:23.751 write: IOPS=8518, BW=33.3MiB/s (34.9MB/s)(33.4MiB/1003msec); 0 zone resets 00:11:23.751 slat (usec): min=2, max=3268, avg=57.23, stdev=220.90 00:11:23.751 clat (usec): min=253, max=13370, avg=7433.11, stdev=1824.37 00:11:23.751 lat (usec): min=3338, max=13384, avg=7490.34, stdev=1835.12 00:11:23.751 clat percentiles (usec): 00:11:23.751 | 1.00th=[ 5932], 5.00th=[ 6063], 10.00th=[ 6128], 20.00th=[ 6325], 00:11:23.751 | 30.00th=[ 6456], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7177], 00:11:23.751 | 70.00th=[ 7504], 80.00th=[ 7898], 90.00th=[ 8455], 95.00th=[12780], 00:11:23.751 | 99.00th=[13173], 99.50th=[13173], 99.90th=[13304], 99.95th=[13304], 00:11:23.751 | 99.99th=[13435] 00:11:23.751 bw ( KiB/s): min=30464, max=36864, per=42.23%, avg=33664.00, stdev=4525.48, samples=2 00:11:23.751 iops : min= 7616, max= 9216, avg=8416.00, stdev=1131.37, samples=2 00:11:23.751 lat (usec) : 500=0.01% 00:11:23.751 lat (msec) : 4=0.19%, 10=90.94%, 20=8.87% 00:11:23.751 cpu : usr=4.09%, sys=6.79%, ctx=1369, majf=0, minf=1 00:11:23.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:23.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.751 issued rwts: total=8192,8544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.751 job3: (groupid=0, jobs=1): err= 0: pid=3220078: Sun Nov 17 15:48:08 2024 00:11:23.751 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:11:23.751 slat (usec): min=2, max=1393, avg=138.55, stdev=298.33 00:11:23.751 clat (usec): min=11904, max=21000, avg=17872.28, stdev=2668.48 00:11:23.751 lat (usec): min=11908, max=21011, avg=18010.83, stdev=2673.24 00:11:23.751 clat percentiles (usec): 00:11:23.751 | 1.00th=[12649], 5.00th=[13173], 10.00th=[13566], 20.00th=[13698], 00:11:23.751 | 30.00th=[18220], 40.00th=[19006], 50.00th=[19268], 60.00th=[19530], 00:11:23.751 | 70.00th=[19792], 80.00th=[19792], 90.00th=[20055], 95.00th=[20055], 00:11:23.751 | 99.00th=[20841], 99.50th=[20841], 99.90th=[21103], 99.95th=[21103], 00:11:23.751 | 99.99th=[21103] 00:11:23.751 write: IOPS=3663, BW=14.3MiB/s (15.0MB/s)(14.4MiB/1003msec); 0 zone resets 00:11:23.751 slat (usec): min=2, max=1533, avg=132.67, stdev=284.51 00:11:23.751 clat (usec): min=2190, max=20266, avg=16980.32, stdev=2652.06 00:11:23.751 lat (usec): min=2194, max=20270, avg=17112.99, stdev=2653.17 00:11:23.751 clat percentiles (usec): 00:11:23.751 | 1.00th=[ 6128], 5.00th=[12387], 10.00th=[12780], 20.00th=[15008], 00:11:23.751 | 30.00th=[17171], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:11:23.751 | 70.00th=[18482], 80.00th=[18744], 90.00th=[19006], 95.00th=[19006], 00:11:23.751 | 99.00th=[19530], 99.50th=[19530], 99.90th=[20055], 99.95th=[20317], 00:11:23.751 | 99.99th=[20317] 00:11:23.751 bw ( KiB/s): min=13664, max=15008, per=17.98%, avg=14336.00, stdev=950.35, samples=2 00:11:23.751 iops : min= 3416, max= 3752, avg=3584.00, stdev=237.59, samples=2 00:11:23.751 lat (msec) : 4=0.21%, 10=0.81%, 20=94.74%, 50=4.24% 00:11:23.751 cpu : usr=1.60%, sys=4.39%, ctx=1492, majf=0, minf=1 00:11:23.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:23.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:23.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:23.751 issued rwts: total=3584,3674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:23.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:23.751 00:11:23.751 Run status group 0 (all jobs): 00:11:23.751 READ: bw=75.0MiB/s (78.6MB/s), 14.0MiB/s-31.9MiB/s (14.6MB/s-33.5MB/s), io=75.2MiB (78.9MB), run=1003-1003msec 00:11:23.751 WRITE: bw=77.8MiB/s (81.6MB/s), 14.3MiB/s-33.3MiB/s (15.0MB/s-34.9MB/s), io=78.1MiB (81.9MB), run=1003-1003msec 00:11:23.751 00:11:23.751 Disk stats (read/write): 00:11:23.751 nvme0n1: ios=2748/3072, merge=0/0, ticks=13024/13562, in_queue=26586, util=84.77% 00:11:23.751 nvme0n2: ios=2736/3072, merge=0/0, ticks=13119/13498, in_queue=26617, util=85.42% 00:11:23.751 nvme0n3: ios=7168/7547, merge=0/0, ticks=12946/13072, in_queue=26018, util=88.49% 00:11:23.751 nvme0n4: ios=2704/3072, merge=0/0, ticks=13066/13560, in_queue=26626, util=89.33% 00:11:23.751 15:48:08 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:23.751 [global] 00:11:23.751 thread=1 00:11:23.751 invalidate=1 00:11:23.751 rw=randwrite 00:11:23.751 time_based=1 00:11:23.751 runtime=1 00:11:23.751 
ioengine=libaio 00:11:23.751 direct=1 00:11:23.751 bs=4096 00:11:23.751 iodepth=128 00:11:23.751 norandommap=0 00:11:23.751 numjobs=1 00:11:23.751 00:11:23.751 verify_dump=1 00:11:23.751 verify_backlog=512 00:11:23.751 verify_state_save=0 00:11:23.751 do_verify=1 00:11:23.751 verify=crc32c-intel 00:11:23.751 [job0] 00:11:23.751 filename=/dev/nvme0n1 00:11:23.751 [job1] 00:11:23.751 filename=/dev/nvme0n2 00:11:23.751 [job2] 00:11:23.751 filename=/dev/nvme0n3 00:11:23.751 [job3] 00:11:23.751 filename=/dev/nvme0n4 00:11:23.751 Could not set queue depth (nvme0n1) 00:11:23.751 Could not set queue depth (nvme0n2) 00:11:23.751 Could not set queue depth (nvme0n3) 00:11:23.751 Could not set queue depth (nvme0n4) 00:11:24.008 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.008 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.008 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.008 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:24.008 fio-3.35 00:11:24.008 Starting 4 threads 00:11:25.379 00:11:25.379 job0: (groupid=0, jobs=1): err= 0: pid=3220494: Sun Nov 17 15:48:10 2024 00:11:25.379 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:11:25.379 slat (usec): min=2, max=1328, avg=93.33, stdev=234.18 00:11:25.379 clat (usec): min=10804, max=13799, avg=12186.03, stdev=370.40 00:11:25.379 lat (usec): min=11279, max=13807, avg=12279.36, stdev=293.13 00:11:25.379 clat percentiles (usec): 00:11:25.379 | 1.00th=[11207], 5.00th=[11469], 10.00th=[11600], 20.00th=[11863], 00:11:25.379 | 30.00th=[12125], 40.00th=[12125], 50.00th=[12256], 60.00th=[12256], 00:11:25.379 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:11:25.379 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13829], 99.95th=[13829], 00:11:25.379 | 99.99th=[13829] 00:11:25.379 write: IOPS=5612, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:11:25.379 slat (usec): min=2, max=1115, avg=87.74, stdev=220.14 00:11:25.379 clat (usec): min=2057, max=13461, avg=11438.31, stdev=824.76 00:11:25.379 lat (usec): min=2725, max=13464, avg=11526.05, stdev=796.51 00:11:25.379 clat percentiles (usec): 00:11:25.379 | 1.00th=[ 7046], 5.00th=[10683], 10.00th=[11076], 20.00th=[11338], 00:11:25.380 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11469], 60.00th=[11600], 00:11:25.380 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:11:25.380 | 99.00th=[12649], 99.50th=[12780], 99.90th=[12780], 99.95th=[12780], 00:11:25.380 | 99.99th=[13435] 00:11:25.380 bw ( KiB/s): min=21796, max=22176, per=27.02%, avg=21986.00, stdev=268.70, samples=2 00:11:25.380 iops : min= 5449, max= 5544, avg=5496.50, stdev=67.18, samples=2 00:11:25.380 lat (msec) : 4=0.19%, 10=0.77%, 20=99.04% 00:11:25.380 cpu : usr=3.49%, sys=6.69%, ctx=1656, majf=0, minf=1 00:11:25.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:25.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.380 issued rwts: total=5120,5629,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.380 job1: (groupid=0, jobs=1): err= 0: pid=3220495: Sun Nov 17 15:48:10 2024 00:11:25.380 read: 
IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:11:25.380 slat (usec): min=2, max=1271, avg=93.39, stdev=234.18 00:11:25.380 clat (usec): min=10813, max=13768, avg=12186.08, stdev=373.00 00:11:25.380 lat (usec): min=11156, max=13775, avg=12279.46, stdev=299.44 00:11:25.380 clat percentiles (usec): 00:11:25.380 | 1.00th=[11207], 5.00th=[11469], 10.00th=[11600], 20.00th=[11863], 00:11:25.380 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12256], 00:11:25.380 | 70.00th=[12387], 80.00th=[12387], 90.00th=[12518], 95.00th=[12649], 00:11:25.380 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13698], 99.95th=[13698], 00:11:25.380 | 99.99th=[13829] 00:11:25.380 write: IOPS=5603, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1003msec); 0 zone resets 00:11:25.380 slat (usec): min=2, max=1069, avg=87.79, stdev=220.00 00:11:25.380 clat (usec): min=2060, max=13464, avg=11450.64, stdev=801.38 00:11:25.380 lat (usec): min=2064, max=13467, avg=11538.43, stdev=770.77 00:11:25.380 clat percentiles (usec): 00:11:25.380 | 1.00th=[ 7767], 5.00th=[10683], 10.00th=[11076], 20.00th=[11207], 00:11:25.380 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11469], 60.00th=[11600], 00:11:25.380 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:11:25.380 | 99.00th=[12649], 99.50th=[12780], 99.90th=[13435], 99.95th=[13435], 00:11:25.380 | 99.99th=[13435] 00:11:25.380 bw ( KiB/s): min=21800, max=22144, per=27.00%, avg=21972.00, stdev=243.24, samples=2 00:11:25.380 iops : min= 5450, max= 5536, avg=5493.00, stdev=60.81, samples=2 00:11:25.380 lat (msec) : 4=0.14%, 10=0.69%, 20=99.17% 00:11:25.380 cpu : usr=2.69%, sys=7.58%, ctx=1624, majf=0, minf=1 00:11:25.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:25.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.380 issued rwts: total=5120,5620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.380 job2: (groupid=0, jobs=1): err= 0: pid=3220502: Sun Nov 17 15:48:10 2024 00:11:25.380 read: IOPS=4532, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1006msec) 00:11:25.380 slat (usec): min=2, max=1770, avg=109.93, stdev=283.69 00:11:25.380 clat (usec): min=4656, max=19001, avg=14299.45, stdev=877.89 00:11:25.380 lat (usec): min=5502, max=19026, avg=14409.38, stdev=866.03 00:11:25.380 clat percentiles (usec): 00:11:25.380 | 1.00th=[10159], 5.00th=[13435], 10.00th=[13698], 20.00th=[14091], 00:11:25.380 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14353], 60.00th=[14484], 00:11:25.380 | 70.00th=[14615], 80.00th=[14615], 90.00th=[14746], 95.00th=[15008], 00:11:25.380 | 99.00th=[15795], 99.50th=[16450], 99.90th=[18220], 99.95th=[19006], 00:11:25.380 | 99.99th=[19006] 00:11:25.380 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:11:25.380 slat (usec): min=2, max=1805, avg=103.06, stdev=267.33 00:11:25.380 clat (usec): min=8096, max=15249, avg=13485.55, stdev=545.59 00:11:25.380 lat (usec): min=8106, max=15913, avg=13588.61, stdev=528.13 00:11:25.380 clat percentiles (usec): 00:11:25.380 | 1.00th=[11994], 5.00th=[12649], 10.00th=[12911], 20.00th=[13304], 00:11:25.380 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:11:25.380 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[13960], 00:11:25.380 | 99.00th=[14353], 99.50th=[14484], 99.90th=[14746], 99.95th=[14877], 00:11:25.380 | 99.99th=[15270] 
00:11:25.380 bw ( KiB/s): min=17096, max=19768, per=22.65%, avg=18432.00, stdev=1889.39, samples=2 00:11:25.380 iops : min= 4274, max= 4942, avg=4608.00, stdev=472.35, samples=2 00:11:25.380 lat (msec) : 10=0.75%, 20=99.25% 00:11:25.380 cpu : usr=2.99%, sys=6.17%, ctx=1230, majf=0, minf=2 00:11:25.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:25.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.380 issued rwts: total=4560,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.380 job3: (groupid=0, jobs=1): err= 0: pid=3220505: Sun Nov 17 15:48:10 2024 00:11:25.380 read: IOPS=4534, BW=17.7MiB/s (18.6MB/s)(17.8MiB/1006msec) 00:11:25.380 slat (usec): min=2, max=1944, avg=109.74, stdev=285.26 00:11:25.380 clat (usec): min=4615, max=18983, avg=14300.57, stdev=873.26 00:11:25.380 lat (usec): min=5476, max=18986, avg=14410.30, stdev=859.15 00:11:25.380 clat percentiles (usec): 00:11:25.380 | 1.00th=[10945], 5.00th=[13435], 10.00th=[13698], 20.00th=[14091], 00:11:25.380 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14353], 60.00th=[14484], 00:11:25.380 | 70.00th=[14484], 80.00th=[14615], 90.00th=[14746], 95.00th=[15008], 00:11:25.380 | 99.00th=[15795], 99.50th=[17171], 99.90th=[19006], 99.95th=[19006], 00:11:25.380 | 99.99th=[19006] 00:11:25.380 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:11:25.380 slat (usec): min=2, max=2485, avg=103.31, stdev=271.00 00:11:25.380 clat (usec): min=8096, max=15757, avg=13478.27, stdev=557.00 00:11:25.380 lat (usec): min=8105, max=16448, avg=13581.58, stdev=545.26 00:11:25.380 clat percentiles (usec): 00:11:25.380 | 1.00th=[11994], 5.00th=[12649], 10.00th=[12911], 20.00th=[13304], 00:11:25.380 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13566], 60.00th=[13698], 00:11:25.380 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14091], 00:11:25.380 | 99.00th=[14222], 99.50th=[14484], 99.90th=[14877], 99.95th=[15270], 00:11:25.380 | 99.99th=[15795] 00:11:25.380 bw ( KiB/s): min=17024, max=19840, per=22.65%, avg=18432.00, stdev=1991.21, samples=2 00:11:25.380 iops : min= 4256, max= 4960, avg=4608.00, stdev=497.80, samples=2 00:11:25.380 lat (msec) : 10=0.69%, 20=99.31% 00:11:25.380 cpu : usr=3.38%, sys=5.77%, ctx=1278, majf=0, minf=2 00:11:25.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:25.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:25.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:25.380 issued rwts: total=4562,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:25.380 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:25.380 00:11:25.380 Run status group 0 (all jobs): 00:11:25.380 READ: bw=75.2MiB/s (78.8MB/s), 17.7MiB/s-19.9MiB/s (18.6MB/s-20.9MB/s), io=75.6MiB (79.3MB), run=1003-1006msec 00:11:25.380 WRITE: bw=79.5MiB/s (83.3MB/s), 17.9MiB/s-21.9MiB/s (18.8MB/s-23.0MB/s), io=79.9MiB (83.8MB), run=1003-1006msec 00:11:25.380 00:11:25.380 Disk stats (read/write): 00:11:25.380 nvme0n1: ios=4389/4608, merge=0/0, ticks=13178/13083, in_queue=26261, util=84.57% 00:11:25.381 nvme0n2: ios=4332/4608, merge=0/0, ticks=13143/13079, in_queue=26222, util=85.22% 00:11:25.381 nvme0n3: ios=3584/4014, merge=0/0, ticks=25325/26422, in_queue=51747, util=88.48% 00:11:25.381 nvme0n4: ios=3584/4018, 
merge=0/0, ticks=25306/26512, in_queue=51818, util=89.52% 00:11:25.381 15:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:25.381 15:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3220763 00:11:25.381 15:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:25.381 15:48:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:25.381 [global] 00:11:25.381 thread=1 00:11:25.381 invalidate=1 00:11:25.381 rw=read 00:11:25.381 time_based=1 00:11:25.381 runtime=10 00:11:25.381 ioengine=libaio 00:11:25.381 direct=1 00:11:25.381 bs=4096 00:11:25.381 iodepth=1 00:11:25.381 norandommap=1 00:11:25.381 numjobs=1 00:11:25.381 00:11:25.381 [job0] 00:11:25.381 filename=/dev/nvme0n1 00:11:25.381 [job1] 00:11:25.381 filename=/dev/nvme0n2 00:11:25.381 [job2] 00:11:25.381 filename=/dev/nvme0n3 00:11:25.381 [job3] 00:11:25.381 filename=/dev/nvme0n4 00:11:25.381 Could not set queue depth (nvme0n1) 00:11:25.381 Could not set queue depth (nvme0n2) 00:11:25.381 Could not set queue depth (nvme0n3) 00:11:25.381 Could not set queue depth (nvme0n4) 00:11:25.638 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.638 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.638 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.638 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.638 fio-3.35 00:11:25.638 Starting 4 threads 00:11:28.166 15:48:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:28.424 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=68882432, buflen=4096 00:11:28.424 fio: pid=3220969, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:28.424 15:48:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:28.682 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=82894848, buflen=4096 00:11:28.682 fio: pid=3220963, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:28.682 15:48:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.682 15:48:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:28.682 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=17195008, buflen=4096 00:11:28.682 fio: pid=3220940, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:28.939 15:48:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:28.939 15:48:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:29.207 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=62054400, 
buflen=4096 00:11:29.207 fio: pid=3220945, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:29.207 00:11:29.207 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3220940: Sun Nov 17 15:48:14 2024 00:11:29.208 read: IOPS=6682, BW=26.1MiB/s (27.4MB/s)(80.4MiB/3080msec) 00:11:29.208 slat (usec): min=7, max=12872, avg=11.28, stdev=148.17 00:11:29.208 clat (usec): min=56, max=470, avg=135.90, stdev=33.13 00:11:29.208 lat (usec): min=64, max=12963, avg=147.18, stdev=151.48 00:11:29.208 clat percentiles (usec): 00:11:29.208 | 1.00th=[ 64], 5.00th=[ 81], 10.00th=[ 85], 20.00th=[ 100], 00:11:29.208 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 143], 00:11:29.208 | 70.00th=[ 147], 80.00th=[ 165], 90.00th=[ 180], 95.00th=[ 186], 00:11:29.208 | 99.00th=[ 208], 99.50th=[ 221], 99.90th=[ 241], 99.95th=[ 245], 00:11:29.208 | 99.99th=[ 359] 00:11:29.208 bw ( KiB/s): min=21552, max=26880, per=24.42%, avg=25193.60, stdev=2188.49, samples=5 00:11:29.208 iops : min= 5388, max= 6720, avg=6298.40, stdev=547.12, samples=5 00:11:29.208 lat (usec) : 100=19.96%, 250=80.00%, 500=0.03% 00:11:29.208 cpu : usr=2.99%, sys=9.55%, ctx=20588, majf=0, minf=1 00:11:29.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.208 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.208 issued rwts: total=20583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.208 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3220945: Sun Nov 17 15:48:14 2024 00:11:29.208 read: IOPS=9122, BW=35.6MiB/s (37.4MB/s)(123MiB/3457msec) 00:11:29.208 slat (usec): min=6, max=11978, avg=10.78, stdev=140.36 00:11:29.208 clat (usec): min=49, max=21472, avg=96.65, stdev=171.07 00:11:29.208 lat (usec): min=59, max=21481, avg=107.43, stdev=221.36 00:11:29.208 clat percentiles (usec): 00:11:29.208 | 1.00th=[ 60], 5.00th=[ 63], 10.00th=[ 68], 20.00th=[ 80], 00:11:29.208 | 30.00th=[ 82], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 89], 00:11:29.208 | 70.00th=[ 93], 80.00th=[ 130], 90.00th=[ 139], 95.00th=[ 145], 00:11:29.208 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 194], 99.95th=[ 215], 00:11:29.208 | 99.99th=[ 482] 00:11:29.208 bw ( KiB/s): min=26992, max=41800, per=34.36%, avg=35454.33, stdev=6277.72, samples=6 00:11:29.208 iops : min= 6748, max=10450, avg=8863.50, stdev=1569.51, samples=6 00:11:29.208 lat (usec) : 50=0.01%, 100=74.93%, 250=25.05%, 500=0.01% 00:11:29.208 lat (msec) : 2=0.01%, 50=0.01% 00:11:29.208 cpu : usr=4.34%, sys=12.73%, ctx=31541, majf=0, minf=2 00:11:29.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.208 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.208 issued rwts: total=31535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.208 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3220963: Sun Nov 17 15:48:14 2024 00:11:29.208 read: IOPS=7049, BW=27.5MiB/s (28.9MB/s)(79.1MiB/2871msec) 00:11:29.208 slat (usec): min=5, max=11707, avg=10.94, stdev=107.67 00:11:29.208 clat (usec): min=71, max=20657, 
avg=128.38, stdev=148.05 00:11:29.208 lat (usec): min=84, max=20671, avg=139.31, stdev=183.05 00:11:29.208 clat percentiles (usec): 00:11:29.208 | 1.00th=[ 86], 5.00th=[ 89], 10.00th=[ 92], 20.00th=[ 95], 00:11:29.208 | 30.00th=[ 99], 40.00th=[ 106], 50.00th=[ 133], 60.00th=[ 137], 00:11:29.208 | 70.00th=[ 141], 80.00th=[ 151], 90.00th=[ 178], 95.00th=[ 184], 00:11:29.208 | 99.00th=[ 221], 99.50th=[ 229], 99.90th=[ 243], 99.95th=[ 249], 00:11:29.208 | 99.99th=[ 375] 00:11:29.208 bw ( KiB/s): min=20904, max=35344, per=27.66%, avg=28539.20, stdev=6379.30, samples=5 00:11:29.208 iops : min= 5226, max= 8836, avg=7134.80, stdev=1594.83, samples=5 00:11:29.208 lat (usec) : 100=32.42%, 250=67.53%, 500=0.04% 00:11:29.208 lat (msec) : 50=0.01% 00:11:29.208 cpu : usr=3.28%, sys=10.52%, ctx=20241, majf=0, minf=2 00:11:29.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.208 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.208 issued rwts: total=20239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.208 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3220969: Sun Nov 17 15:48:14 2024 00:11:29.208 read: IOPS=6308, BW=24.6MiB/s (25.8MB/s)(65.7MiB/2666msec) 00:11:29.208 slat (nsec): min=8083, max=55306, avg=10258.33, stdev=2564.94 00:11:29.208 clat (usec): min=80, max=574, avg=145.50, stdev=24.84 00:11:29.208 lat (usec): min=90, max=583, avg=155.76, stdev=24.81 00:11:29.208 clat percentiles (usec): 00:11:29.208 | 1.00th=[ 92], 5.00th=[ 101], 10.00th=[ 124], 20.00th=[ 133], 00:11:29.208 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:11:29.208 | 70.00th=[ 149], 80.00th=[ 172], 90.00th=[ 182], 95.00th=[ 186], 00:11:29.208 | 99.00th=[ 215], 99.50th=[ 225], 99.90th=[ 241], 99.95th=[ 245], 00:11:29.208 | 99.99th=[ 498] 00:11:29.208 bw ( KiB/s): min=21432, max=28224, per=24.69%, avg=25478.40, stdev=2625.84, samples=5 00:11:29.208 iops : min= 5358, max= 7056, avg=6369.60, stdev=656.46, samples=5 00:11:29.208 lat (usec) : 100=4.55%, 250=95.41%, 500=0.02%, 750=0.01% 00:11:29.208 cpu : usr=2.96%, sys=9.53%, ctx=16819, majf=0, minf=2 00:11:29.208 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:29.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.208 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.208 issued rwts: total=16818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.208 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:29.208 00:11:29.208 Run status group 0 (all jobs): 00:11:29.208 READ: bw=101MiB/s (106MB/s), 24.6MiB/s-35.6MiB/s (25.8MB/s-37.4MB/s), io=348MiB (365MB), run=2666-3457msec 00:11:29.208 00:11:29.208 Disk stats (read/write): 00:11:29.208 nvme0n1: ios=18341/0, merge=0/0, ticks=2477/0, in_queue=2477, util=94.06% 00:11:29.208 nvme0n2: ios=30229/0, merge=0/0, ticks=2707/0, in_queue=2707, util=94.34% 00:11:29.208 nvme0n3: ios=20133/0, merge=0/0, ticks=2406/0, in_queue=2406, util=95.68% 00:11:29.208 nvme0n4: ios=16466/0, merge=0/0, ticks=2228/0, in_queue=2228, util=96.42% 00:11:29.208 15:48:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.208 15:48:14 nvmf_rdma.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:29.806 15:48:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:29.806 15:48:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:30.071 15:48:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.071 15:48:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:30.636 15:48:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.636 15:48:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:30.893 15:48:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:30.894 15:48:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:31.151 15:48:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:31.151 15:48:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3220763 00:11:31.151 15:48:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:31.151 15:48:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.083 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.084 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:32.084 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:32.084 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.084 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:32.084 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.084 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:32.084 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:32.084 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:32.084 nvmf hotplug test: fio failed as expected 00:11:32.084 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:32.341 15:48:17 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:32.341 rmmod nvme_rdma 00:11:32.341 rmmod nvme_fabrics 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3217115 ']' 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3217115 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3217115 ']' 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3217115 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.341 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3217115 00:11:32.599 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.599 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.599 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3217115' 00:11:32.599 killing process with pid 3217115 00:11:32.599 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3217115 00:11:32.599 15:48:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3217115 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:34.497 00:11:34.497 real 0m30.418s 00:11:34.497 user 2m19.299s 00:11:34.497 sys 0m10.674s 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.497 15:48:19 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.497 ************************************ 00:11:34.497 END TEST nvmf_fio_target 00:11:34.497 ************************************ 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:34.497 ************************************ 00:11:34.497 START TEST nvmf_bdevio 00:11:34.497 ************************************ 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:11:34.497 * Looking for test storage... 00:11:34.497 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.497 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:34.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.498 --rc genhtml_branch_coverage=1 00:11:34.498 --rc genhtml_function_coverage=1 00:11:34.498 --rc genhtml_legend=1 00:11:34.498 --rc geninfo_all_blocks=1 00:11:34.498 --rc geninfo_unexecuted_blocks=1 00:11:34.498 00:11:34.498 ' 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:34.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.498 --rc genhtml_branch_coverage=1 00:11:34.498 --rc genhtml_function_coverage=1 00:11:34.498 --rc genhtml_legend=1 00:11:34.498 --rc geninfo_all_blocks=1 00:11:34.498 --rc geninfo_unexecuted_blocks=1 00:11:34.498 00:11:34.498 ' 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:34.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.498 --rc genhtml_branch_coverage=1 00:11:34.498 --rc genhtml_function_coverage=1 00:11:34.498 --rc genhtml_legend=1 00:11:34.498 --rc geninfo_all_blocks=1 00:11:34.498 --rc geninfo_unexecuted_blocks=1 00:11:34.498 00:11:34.498 ' 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:34.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.498 --rc genhtml_branch_coverage=1 00:11:34.498 --rc genhtml_function_coverage=1 00:11:34.498 --rc genhtml_legend=1 00:11:34.498 --rc geninfo_all_blocks=1 00:11:34.498 --rc geninfo_unexecuted_blocks=1 00:11:34.498 00:11:34.498 ' 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:34.498 15:48:19 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.498 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:34.498 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.499 15:48:19 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:41.057 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:41.057 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:41.057 15:48:26 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:41.057 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:41.057 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # rdma_device_init 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:41.057 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
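The "Found 0000:d9:00.0 (0x15b3 - 0x1015)" and "Found net devices under ..." records above report each port's vendor/device IDs and the netdevs sysfs exposes beneath it. A minimal, hand-runnable sketch of equivalent lookups for this rig's two ConnectX ports (illustrative only, not the common.sh implementation):

    for pci in 0000:d9:00.0 0000:d9:00.1; do
        ven=$(cat /sys/bus/pci/devices/$pci/vendor)     # 0x15b3 -> Mellanox
        dev=$(cat /sys/bus/pci/devices/$pci/device)     # 0x1015 on both ports here
        drv=$(basename "$(readlink -f /sys/bus/pci/devices/$pci/driver)")
        echo "Found $pci ($ven - $dev), driver $drv"
        ls "/sys/bus/pci/devices/$pci/net/"             # -> mlx_0_0 / mlx_0_1
    done

The modprobe calls traced just above (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm) are the RDMA stack that rdma_device_init brings up before any addresses are assigned.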
00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:41.058 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.058 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:41.058 altname enp217s0f0np0 00:11:41.058 altname ens818f0np0 00:11:41.058 inet 192.168.100.8/24 scope global mlx_0_0 00:11:41.058 valid_lft forever preferred_lft forever 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:41.058 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:41.058 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:41.058 altname enp217s0f1np1 00:11:41.058 altname ens818f1np1 00:11:41.058 inet 192.168.100.9/24 scope global mlx_0_1 00:11:41.058 valid_lft forever preferred_lft forever 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
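The per-interface address harvesting traced above reduces to a short pipeline, and the two harvested addresses are split into first/second target IPs a few records further down (the head -n 1 / tail -n +2 steps). A hedged sketch of both steps; the helper name mirrors the trace, but this is not the literal common.sh source:

    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1   # field 4 is "addr/prefix"
    }
    RDMA_IP_LIST=$(printf '%s\n%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 on this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9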
00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:41.058 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:41.319 192.168.100.9' 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:41.319 192.168.100.9' 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # head -n 1 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:41.319 192.168.100.9' 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # tail -n +2 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # head -n 1 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' rdma 
== rdma ']' 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3225855 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3225855 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3225855 ']' 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.319 15:48:26 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.319 [2024-11-17 15:48:26.750780] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:41.319 [2024-11-17 15:48:26.750876] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.576 [2024-11-17 15:48:26.886238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.577 [2024-11-17 15:48:26.985981] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.577 [2024-11-17 15:48:26.986028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.577 [2024-11-17 15:48:26.986040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.577 [2024-11-17 15:48:26.986053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.577 [2024-11-17 15:48:26.986063] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
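nvmfappstart launches the target with -m 0x78, a CPU core bitmask with bits 3-6 set, which is why the reactor records just below come up on cores 3, 4, 5 and 6 (in arbitrary order). A one-liner to decode such a mask, for reference:

    mask=0x78
    printf 'cores in %s:' "$mask"; for bit in {0..31}; do (( (mask >> bit) & 1 )) && printf ' %d' "$bit"; done; echo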
00:11:41.577 [2024-11-17 15:48:26.988589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:41.577 [2024-11-17 15:48:26.988659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:41.577 [2024-11-17 15:48:26.988726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.577 [2024-11-17 15:48:26.988752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:42.141 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.141 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:42.141 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.141 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.141 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.141 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.141 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:42.142 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.142 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.142 [2024-11-17 15:48:27.634795] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7efe90b76940) succeed. 00:11:42.142 [2024-11-17 15:48:27.644235] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7efe90b32940) succeed. 
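With the target listening, bdevio.sh provisions it through rpc_cmd, which is evidently a thin wrapper over scripts/rpk.py-style RPCs against /var/tmp/spdk.sock, the same socket waitforlisten polled above. The provisioning RPCs, one already issued above and the rest in the records that follow, amount to roughly this sketch (argument values copied from the trace):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420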
00:11:42.399 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.399 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:42.399 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.399 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.658 Malloc0 00:11:42.658 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.658 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.658 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.658 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.658 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.658 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.658 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.658 15:48:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.658 [2024-11-17 15:48:28.013012] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:42.658 { 00:11:42.658 "params": { 00:11:42.658 "name": "Nvme$subsystem", 00:11:42.658 "trtype": "$TEST_TRANSPORT", 00:11:42.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:42.658 "adrfam": "ipv4", 00:11:42.658 "trsvcid": "$NVMF_PORT", 00:11:42.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:42.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:42.658 "hdgst": ${hdgst:-false}, 00:11:42.658 "ddgst": ${ddgst:-false} 00:11:42.658 }, 00:11:42.658 "method": "bdev_nvme_attach_controller" 00:11:42.658 } 00:11:42.658 EOF 00:11:42.658 )") 00:11:42.658 15:48:28 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:42.658 15:48:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:42.658 "params": { 00:11:42.658 "name": "Nvme1", 00:11:42.658 "trtype": "rdma", 00:11:42.658 "traddr": "192.168.100.8", 00:11:42.658 "adrfam": "ipv4", 00:11:42.658 "trsvcid": "4420", 00:11:42.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:42.658 "hdgst": false, 00:11:42.658 "ddgst": false 00:11:42.658 }, 00:11:42.658 "method": "bdev_nvme_attach_controller" 00:11:42.658 }' 00:11:42.658 [2024-11-17 15:48:28.096894] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:42.658 [2024-11-17 15:48:28.096985] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3226048 ] 00:11:42.916 [2024-11-17 15:48:28.232088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:42.916 [2024-11-17 15:48:28.340914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.916 [2024-11-17 15:48:28.340981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.916 [2024-11-17 15:48:28.340986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.482 I/O targets: 00:11:43.482 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:43.482 00:11:43.482 00:11:43.482 CUnit - A unit testing framework for C - Version 2.1-3 00:11:43.482 http://cunit.sourceforge.net/ 00:11:43.482 00:11:43.482 00:11:43.482 Suite: bdevio tests on: Nvme1n1 00:11:43.482 Test: blockdev write read block ...passed 00:11:43.482 Test: blockdev write zeroes read block ...passed 00:11:43.482 Test: blockdev write zeroes read no split ...passed 00:11:43.482 Test: blockdev write zeroes read split ...passed 00:11:43.482 Test: blockdev write zeroes read split partial ...passed 00:11:43.482 Test: blockdev reset ...[2024-11-17 15:48:28.809942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:43.482 [2024-11-17 15:48:28.845615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:11:43.482 [2024-11-17 15:48:28.879452] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
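Note that the initiator side here is not a kernel `nvme connect`: the bdevio application consumes the bdev_nvme_attach_controller JSON printed just above, which gen_nvmf_target_json emits on the fly and hands over through what is evidently process substitution, hence the `--json /dev/fd/62` in the trace. In outline:

    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)     # the fd number is whatever bash picks

The reactors reported on cores 0-2 right after this belong to the bdevio app itself (its EAL parameters show -c 0x7), not to the nvmf target, which keeps running on cores 3-6.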
00:11:43.482 passed 00:11:43.482 Test: blockdev write read 8 blocks ...passed 00:11:43.482 Test: blockdev write read size > 128k ...passed 00:11:43.482 Test: blockdev write read invalid size ...passed 00:11:43.482 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:43.482 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:43.482 Test: blockdev write read max offset ...passed 00:11:43.482 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:43.482 Test: blockdev writev readv 8 blocks ...passed 00:11:43.482 Test: blockdev writev readv 30 x 1block ...passed 00:11:43.482 Test: blockdev writev readv block ...passed 00:11:43.482 Test: blockdev writev readv size > 128k ...passed 00:11:43.482 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:43.483 Test: blockdev comparev and writev ...[2024-11-17 15:48:28.884861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.483 [2024-11-17 15:48:28.884902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:43.483 [2024-11-17 15:48:28.884921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.483 [2024-11-17 15:48:28.884937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:43.483 [2024-11-17 15:48:28.885138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.483 [2024-11-17 15:48:28.885157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:43.483 [2024-11-17 15:48:28.885171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.483 [2024-11-17 15:48:28.885187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:43.483 [2024-11-17 15:48:28.885375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.483 [2024-11-17 15:48:28.885395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:43.483 [2024-11-17 15:48:28.885409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.483 [2024-11-17 15:48:28.885423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:43.483 [2024-11-17 15:48:28.885619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.483 [2024-11-17 15:48:28.885640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:43.483 [2024-11-17 15:48:28.885655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.483 [2024-11-17 15:48:28.885670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:43.483 passed 00:11:43.483 Test: blockdev nvme passthru rw ...passed 00:11:43.483 Test: blockdev nvme passthru vendor specific ...[2024-11-17 15:48:28.886012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:43.483 [2024-11-17 15:48:28.886034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:43.483 [2024-11-17 15:48:28.886097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:43.483 [2024-11-17 15:48:28.886114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:43.483 [2024-11-17 15:48:28.886167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:43.483 [2024-11-17 15:48:28.886184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:43.483 [2024-11-17 15:48:28.886236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:43.483 [2024-11-17 15:48:28.886252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:43.483 passed 00:11:43.483 Test: blockdev nvme admin passthru ...passed 00:11:43.483 Test: blockdev copy ...passed 00:11:43.483 00:11:43.483 Run Summary: Type Total Ran Passed Failed Inactive 00:11:43.483 suites 1 1 n/a 0 0 00:11:43.483 tests 23 23 23 0 0 00:11:43.483 asserts 152 152 152 0 n/a 00:11:43.483 00:11:43.483 Elapsed time = 0.358 seconds 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:44.416 rmmod nvme_rdma 00:11:44.416 rmmod nvme_fabrics 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:44.416 15:48:29 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:44.416 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3225855 ']' 00:11:44.417 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3225855 00:11:44.417 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3225855 ']' 00:11:44.417 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3225855 00:11:44.417 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:44.417 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.417 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3225855 00:11:44.417 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:44.417 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:44.417 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3225855' 00:11:44.417 killing process with pid 3225855 00:11:44.417 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3225855 00:11:44.417 15:48:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3225855 00:11:46.316 15:48:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:46.316 15:48:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:11:46.316 00:11:46.316 real 0m12.111s 00:11:46.316 user 0m22.819s 00:11:46.316 sys 0m6.151s 00:11:46.316 15:48:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.316 15:48:31 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:46.316 ************************************ 00:11:46.316 END TEST nvmf_bdevio 00:11:46.316 ************************************ 00:11:46.316 15:48:31 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:46.316 00:11:46.316 real 4m40.454s 00:11:46.316 user 12m27.379s 00:11:46.316 sys 1m41.225s 00:11:46.316 15:48:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.316 15:48:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:46.316 ************************************ 00:11:46.316 END TEST nvmf_target_core 00:11:46.316 ************************************ 00:11:46.316 15:48:31 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:46.316 15:48:31 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.316 15:48:31 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.316 15:48:31 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:46.316 ************************************ 00:11:46.316 START TEST nvmf_target_extra 00:11:46.316 ************************************ 00:11:46.316 15:48:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:46.576 * Looking for test storage... 00:11:46.576 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:11:46.576 15:48:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:46.576 15:48:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:46.576 15:48:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:46.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.576 --rc genhtml_branch_coverage=1 00:11:46.576 --rc genhtml_function_coverage=1 00:11:46.576 --rc genhtml_legend=1 00:11:46.576 --rc geninfo_all_blocks=1 00:11:46.576 --rc geninfo_unexecuted_blocks=1 00:11:46.576 00:11:46.576 ' 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:46.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.576 --rc genhtml_branch_coverage=1 00:11:46.576 --rc genhtml_function_coverage=1 00:11:46.576 --rc genhtml_legend=1 00:11:46.576 --rc geninfo_all_blocks=1 00:11:46.576 --rc geninfo_unexecuted_blocks=1 00:11:46.576 00:11:46.576 ' 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:46.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.576 --rc genhtml_branch_coverage=1 00:11:46.576 --rc genhtml_function_coverage=1 00:11:46.576 --rc genhtml_legend=1 00:11:46.576 --rc geninfo_all_blocks=1 00:11:46.576 --rc geninfo_unexecuted_blocks=1 00:11:46.576 00:11:46.576 ' 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:46.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.576 --rc genhtml_branch_coverage=1 00:11:46.576 --rc genhtml_function_coverage=1 00:11:46.576 --rc genhtml_legend=1 00:11:46.576 --rc geninfo_all_blocks=1 00:11:46.576 --rc geninfo_unexecuted_blocks=1 00:11:46.576 00:11:46.576 ' 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:11:46.576 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.577 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.577 ************************************ 00:11:46.577 START TEST nvmf_example 00:11:46.577 ************************************ 00:11:46.577 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:46.837 * Looking for test storage... 
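One quirk visible in these traces: paths/export.sh prepends the same go/protoc/golangci directories every time it is sourced, so the PATH echoed at paths/export.sh@6 carries several duplicate copies of those entries. Harmless, but a guard along these lines would keep it flat (illustrative sketch only):

    prepend_path() {                    # add $1 to PATH only if it is not already there
        case ":$PATH:" in
            *":$1:"*) ;;
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/golangci/1.54.2/bin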
00:11:46.838 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:46.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.838 --rc genhtml_branch_coverage=1 00:11:46.838 --rc genhtml_function_coverage=1 00:11:46.838 --rc genhtml_legend=1 00:11:46.838 --rc geninfo_all_blocks=1 00:11:46.838 --rc geninfo_unexecuted_blocks=1 00:11:46.838 00:11:46.838 ' 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:46.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.838 --rc genhtml_branch_coverage=1 00:11:46.838 --rc genhtml_function_coverage=1 00:11:46.838 --rc genhtml_legend=1 00:11:46.838 --rc geninfo_all_blocks=1 00:11:46.838 --rc geninfo_unexecuted_blocks=1 00:11:46.838 00:11:46.838 ' 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:46.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.838 --rc genhtml_branch_coverage=1 00:11:46.838 --rc genhtml_function_coverage=1 00:11:46.838 --rc genhtml_legend=1 00:11:46.838 --rc geninfo_all_blocks=1 00:11:46.838 --rc geninfo_unexecuted_blocks=1 00:11:46.838 00:11:46.838 ' 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:46.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.838 --rc genhtml_branch_coverage=1 00:11:46.838 --rc genhtml_function_coverage=1 00:11:46.838 --rc genhtml_legend=1 00:11:46.838 --rc geninfo_all_blocks=1 00:11:46.838 --rc geninfo_unexecuted_blocks=1 00:11:46.838 00:11:46.838 ' 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.838 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.838 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.839 15:48:32 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:11:53.394 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 
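The harness classifies the two ports above by their PCI vendor/device pair (0x15b3 / 0x1015). A quick standalone cross-check with lspci, assuming the tool is present on the node (the model name in the comment is illustrative, not taken from this log):
lspci -nn -d 15b3:1015
# expected: one line per port (d9:00.0 and d9:00.1), each reporting a Mellanox
# ConnectX-4 Lx family device carrying the [15b3:1015] ID pair matched above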
00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:11:53.394 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.394 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:11:53.395 Found net devices under 0000:d9:00.0: mlx_0_0 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:11:53.395 Found net devices under 0000:d9:00.1: mlx_0_1 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:53.395 15:48:38 
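Each matched PCI function is then resolved to its net device through sysfs, which is how the two "Found net devices under 0000:d9:00.x" messages above are produced. A minimal hand-run sketch of that lookup (the loop is illustrative; the harness does the equivalent with pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)):
for pci in 0000:d9:00.0 0000:d9:00.1; do
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$netdir" ] && echo "$pci -> $(basename "$netdir")"   # mlx_0_0 / mlx_0_1 on this node
    done
done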
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # rdma_device_init 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@530 -- # allocate_nic_ips 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:53.395 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:53.395 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:11:53.395 altname enp217s0f0np0 00:11:53.395 altname ens818f0np0 00:11:53.395 inet 192.168.100.8/24 scope global mlx_0_0 00:11:53.395 valid_lft forever preferred_lft forever 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:53.395 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:53.395 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:11:53.395 altname enp217s0f1np1 00:11:53.395 altname ens818f1np1 00:11:53.395 inet 192.168.100.9/24 scope global mlx_0_1 00:11:53.395 valid_lft forever preferred_lft forever 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- 
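rdma_device_init loads the kernel IB/RDMA stack, and allocate_nic_ips then reads back the 192.168.100.0/24 addresses on the two mlx ports. A hand-run equivalent, assuming a node where the addresses still have to be assigned (on this node they were already configured, so the trace only reads them back):
modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
ip addr add 192.168.100.8/24 dev mlx_0_0
ip addr add 192.168.100.9/24 dev mlx_0_1
ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8, as in the trace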
# get_available_rdma_ips 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:53.395 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:53.396 15:48:38 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:11:53.396 192.168.100.9' 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:11:53.396 192.168.100.9' 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # head -n 1 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:11:53.396 192.168.100.9' 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # tail -n +2 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # head -n 1 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3230058 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3230058 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3230058 ']' 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
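waitforlisten blocks until the example target has created its RPC socket. A simplified sketch of that wait, assuming the default /var/tmp/spdk.sock path shown above (the real helper in test/common/autotest_common.sh also retries RPC calls and handles more failure modes):
pid=3230058                     # pid reported in the trace; purely illustrative here
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    kill -0 "$pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
    sleep 0.1
done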
00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:53.396 15:48:38 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.326 15:48:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.326 15:48:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:54.326 15:48:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:54.326 15:48:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:54.326 15:48:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.326 15:48:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:54.326 15:48:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.326 15:48:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.583 15:48:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.583 15:48:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:54.583 15:48:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.583 15:48:39 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.583 15:48:40 
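The five rpc_cmd calls above are the whole target configuration for this test: one RDMA transport, one 64 MiB / 512 B-block malloc bdev, one subsystem carrying that namespace, and one RDMA listener on 192.168.100.8:4420. rpc_cmd forwards them to scripts/rpc.py over /var/tmp/spdk.sock, so the same setup can be reproduced by hand with the standalone client (arguments mirror the trace):
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512                                    # prints the bdev name, Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420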
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:54.583 15:48:40 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:06.870 Initializing NVMe Controllers 00:12:06.870 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:06.870 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:06.870 Initialization complete. Launching workers. 00:12:06.870 ======================================================== 00:12:06.870 Latency(us) 00:12:06.870 Device Information : IOPS MiB/s Average min max 00:12:06.870 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 22938.70 89.60 2789.48 743.43 12089.43 00:12:06.870 ======================================================== 00:12:06.870 Total : 22938.70 89.60 2789.48 743.43 12089.43 00:12:06.870 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:06.870 rmmod nvme_rdma 00:12:06.870 rmmod nvme_fabrics 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3230058 ']' 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3230058 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3230058 ']' 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3230058 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps 
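The performance check above uses the userspace initiator (spdk_nvme_perf); this test never attaches the kernel initiator. For reference, while the target was still up the same subsystem could also have been reached with nvme-cli, using the connect options assembled earlier in the trace (illustrative only, not executed by this test):
modprobe nvme-rdma
nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
nvme list                                  # the 64 MiB Malloc0 namespace should show up
nvme disconnect -n nqn.2016-06.io.spdk:cnode1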
--no-headers -o comm= 3230058 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3230058' 00:12:06.870 killing process with pid 3230058 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3230058 00:12:06.870 15:48:51 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3230058 00:12:07.804 nvmf threads initialize successfully 00:12:07.804 bdev subsystem init successfully 00:12:07.804 created a nvmf target service 00:12:07.804 create targets's poll groups done 00:12:07.804 all subsystems of target started 00:12:07.804 nvmf target is running 00:12:07.804 all subsystems of target stopped 00:12:07.804 destroy targets's poll groups done 00:12:07.804 destroyed the nvmf target service 00:12:07.804 bdev subsystem finish successfully 00:12:07.804 nvmf threads destroy successfully 00:12:07.804 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:07.804 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:07.804 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:07.804 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.804 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:08.062 00:12:08.062 real 0m21.274s 00:12:08.062 user 0m58.242s 00:12:08.062 sys 0m5.703s 00:12:08.062 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.062 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:08.062 ************************************ 00:12:08.062 END TEST nvmf_example 00:12:08.062 ************************************ 00:12:08.062 15:48:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:08.062 15:48:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.062 15:48:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.062 15:48:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.062 ************************************ 00:12:08.062 START TEST nvmf_filesystem 00:12:08.062 ************************************ 00:12:08.062 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:12:08.062 * Looking for test storage... 
00:12:08.062 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.062 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:08.062 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:08.062 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:08.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.324 --rc genhtml_branch_coverage=1 00:12:08.324 --rc genhtml_function_coverage=1 00:12:08.324 --rc genhtml_legend=1 00:12:08.324 --rc geninfo_all_blocks=1 00:12:08.324 --rc geninfo_unexecuted_blocks=1 00:12:08.324 00:12:08.324 ' 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:08.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.324 --rc genhtml_branch_coverage=1 00:12:08.324 --rc genhtml_function_coverage=1 00:12:08.324 --rc genhtml_legend=1 00:12:08.324 --rc geninfo_all_blocks=1 00:12:08.324 --rc geninfo_unexecuted_blocks=1 00:12:08.324 00:12:08.324 ' 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:08.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.324 --rc genhtml_branch_coverage=1 00:12:08.324 --rc genhtml_function_coverage=1 00:12:08.324 --rc genhtml_legend=1 00:12:08.324 --rc geninfo_all_blocks=1 00:12:08.324 --rc geninfo_unexecuted_blocks=1 00:12:08.324 00:12:08.324 ' 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:08.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.324 --rc genhtml_branch_coverage=1 00:12:08.324 --rc genhtml_function_coverage=1 00:12:08.324 --rc genhtml_legend=1 00:12:08.324 --rc geninfo_all_blocks=1 00:12:08.324 --rc geninfo_unexecuted_blocks=1 00:12:08.324 00:12:08.324 ' 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:12:08.324 15:48:53 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:08.324 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 
00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:08.325 15:48:53 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 
-- # CONFIG_RAID5F=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:08.325 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:08.326 15:48:53 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:08.326 #define SPDK_CONFIG_H 00:12:08.326 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:08.326 #define SPDK_CONFIG_APPS 1 00:12:08.326 #define SPDK_CONFIG_ARCH native 00:12:08.326 #define SPDK_CONFIG_ASAN 1 00:12:08.326 #undef SPDK_CONFIG_AVAHI 00:12:08.326 #undef SPDK_CONFIG_CET 00:12:08.326 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:08.326 #define SPDK_CONFIG_COVERAGE 1 00:12:08.326 #define SPDK_CONFIG_CROSS_PREFIX 00:12:08.326 #undef SPDK_CONFIG_CRYPTO 00:12:08.326 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:08.326 #undef SPDK_CONFIG_CUSTOMOCF 00:12:08.326 #undef SPDK_CONFIG_DAOS 00:12:08.326 #define SPDK_CONFIG_DAOS_DIR 00:12:08.326 #define SPDK_CONFIG_DEBUG 1 00:12:08.326 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:08.326 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:12:08.326 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:08.326 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:08.326 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:08.326 #undef SPDK_CONFIG_DPDK_UADK 00:12:08.326 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:12:08.326 #define SPDK_CONFIG_EXAMPLES 1 00:12:08.326 #undef SPDK_CONFIG_FC 00:12:08.326 #define SPDK_CONFIG_FC_PATH 00:12:08.326 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:08.326 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:08.326 #define SPDK_CONFIG_FSDEV 1 00:12:08.326 #undef SPDK_CONFIG_FUSE 00:12:08.326 #undef SPDK_CONFIG_FUZZER 00:12:08.326 #define SPDK_CONFIG_FUZZER_LIB 00:12:08.326 #undef SPDK_CONFIG_GOLANG 00:12:08.326 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:08.326 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:08.326 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:08.326 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:08.326 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:08.326 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:08.326 #undef SPDK_CONFIG_HAVE_LZ4 00:12:08.326 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:08.326 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:08.326 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:08.326 #define SPDK_CONFIG_IDXD 1 00:12:08.326 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:08.326 #undef SPDK_CONFIG_IPSEC_MB 00:12:08.326 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:08.326 #define SPDK_CONFIG_ISAL 1 00:12:08.326 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:08.326 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:08.326 #define SPDK_CONFIG_LIBDIR 00:12:08.326 #undef SPDK_CONFIG_LTO 00:12:08.326 #define SPDK_CONFIG_MAX_LCORES 128 00:12:08.326 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:08.326 #define SPDK_CONFIG_NVME_CUSE 1 00:12:08.326 #undef SPDK_CONFIG_OCF 00:12:08.326 #define SPDK_CONFIG_OCF_PATH 00:12:08.326 #define SPDK_CONFIG_OPENSSL_PATH 00:12:08.326 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:08.326 #define SPDK_CONFIG_PGO_DIR 00:12:08.326 #undef SPDK_CONFIG_PGO_USE 00:12:08.326 #define SPDK_CONFIG_PREFIX /usr/local 00:12:08.326 #undef SPDK_CONFIG_RAID5F 00:12:08.326 #undef SPDK_CONFIG_RBD 00:12:08.326 #define SPDK_CONFIG_RDMA 1 00:12:08.326 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:08.326 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:08.326 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:08.326 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:08.326 #define SPDK_CONFIG_SHARED 1 00:12:08.326 #undef SPDK_CONFIG_SMA 
00:12:08.326 #define SPDK_CONFIG_TESTS 1 00:12:08.326 #undef SPDK_CONFIG_TSAN 00:12:08.326 #define SPDK_CONFIG_UBLK 1 00:12:08.326 #define SPDK_CONFIG_UBSAN 1 00:12:08.326 #undef SPDK_CONFIG_UNIT_TESTS 00:12:08.326 #undef SPDK_CONFIG_URING 00:12:08.326 #define SPDK_CONFIG_URING_PATH 00:12:08.326 #undef SPDK_CONFIG_URING_ZNS 00:12:08.326 #undef SPDK_CONFIG_USDT 00:12:08.326 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:08.326 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:08.326 #undef SPDK_CONFIG_VFIO_USER 00:12:08.326 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:08.326 #define SPDK_CONFIG_VHOST 1 00:12:08.326 #define SPDK_CONFIG_VIRTIO 1 00:12:08.326 #undef SPDK_CONFIG_VTUNE 00:12:08.326 #define SPDK_CONFIG_VTUNE_DIR 00:12:08.326 #define SPDK_CONFIG_WERROR 1 00:12:08.326 #define SPDK_CONFIG_WPDK_DIR 00:12:08.326 #undef SPDK_CONFIG_XNVME 00:12:08.326 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:08.326 15:48:53 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:08.326 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
export SPDK_TEST_ISCSI 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:08.327 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j112 00:12:08.328 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=rdma 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3232780 ]] 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3232780 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.0yNWlE 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.0yNWlE/tests/target /tmp/spdk.0yNWlE 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=55486717952 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61730598912 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6243880960 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30850502656 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865297408 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=14794752 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12323033088 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12346122240 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23089152 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30865035264 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30865301504 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=266240 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:08.329 15:48:53 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6173044736 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6173057024 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:08.329 * Looking for test storage... 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=55486717952 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:08.329 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8458473472 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.330 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:08.330 15:48:53 
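The storage probe traced just above reduces to a simple rule: ask df -T which filesystem backs the test directory, make sure it has room for the roughly 2.2 GB the run requests, and only then export SPDK_TEST_STORAGE. A minimal standalone sketch of that decision, assuming GNU df with byte-sized output for simplicity (pick_test_storage is an illustrative name; the real logic is set_test_storage in autotest_common.sh):

# Illustrative re-sketch of the storage check, not the harness code itself.
pick_test_storage() {
    local testdir=$1 requested=$2                 # requested bytes; the trace above asks for 2214592512
    local fstype avail mount
    # Same probe as the trace: df -T on the candidate dir, skip the header row.
    # -B1 is an assumption here so the numbers are already in bytes.
    read -r fstype avail mount < <(df -T -B1 "$testdir" | awk 'NR==2 {print $2, $5, $7}')
    if (( avail >= requested )); then
        export SPDK_TEST_STORAGE=$testdir
        printf '* Found test storage at %s\n' "$testdir"
    else
        printf 'not enough space on %s (%s): need %s, have %s\n' "$mount" "$fstype" "$requested" "$avail" >&2
        return 1
    fi
}

# e.g. the candidate the trace settles on:
pick_test_storage /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 2214592512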
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:08.330 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:08.589 15:48:53 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:08.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.589 --rc genhtml_branch_coverage=1 00:12:08.589 --rc genhtml_function_coverage=1 00:12:08.589 --rc genhtml_legend=1 00:12:08.589 --rc geninfo_all_blocks=1 00:12:08.589 --rc geninfo_unexecuted_blocks=1 00:12:08.589 00:12:08.589 ' 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:08.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.589 --rc genhtml_branch_coverage=1 00:12:08.589 --rc genhtml_function_coverage=1 00:12:08.589 --rc genhtml_legend=1 00:12:08.589 --rc geninfo_all_blocks=1 00:12:08.589 --rc geninfo_unexecuted_blocks=1 00:12:08.589 00:12:08.589 ' 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:08.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.589 --rc genhtml_branch_coverage=1 00:12:08.589 --rc genhtml_function_coverage=1 00:12:08.589 --rc genhtml_legend=1 00:12:08.589 --rc geninfo_all_blocks=1 00:12:08.589 --rc geninfo_unexecuted_blocks=1 00:12:08.589 00:12:08.589 ' 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:08.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.589 --rc genhtml_branch_coverage=1 00:12:08.589 --rc genhtml_function_coverage=1 00:12:08.589 --rc genhtml_legend=1 00:12:08.589 --rc geninfo_all_blocks=1 00:12:08.589 --rc geninfo_unexecuted_blocks=1 00:12:08.589 00:12:08.589 ' 00:12:08.589 
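The trace above is scripts/common.sh deciding which lcov --rc option spelling to export by comparing two dotted version strings field by field (it runs the comparison as "lt 1.15 2"). A minimal stand-alone sketch of that comparison, assuming an illustrative helper name version_lt rather than the script's real function names:

    #!/usr/bin/env bash
    # Sketch of the field-wise version comparison seen in the trace:
    # split both versions on '.', '-' or ':' and compare numerically, left to right.
    version_lt() {                      # returns 0 (true) when $1 < $2
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                        # equal is not "less than"
    }

    # The trace effectively runs: lt 1.15 2 -> true, so it exports the
    # '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' style options.
    version_lt 1.15 2 && echo "lcov < 2: use the lcov_branch_coverage option names"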
15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.589 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.590 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.590 15:48:53 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:16.729 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:16.729 
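The arrays built above are a whitelist of Intel E810/X722 and Mellanox ConnectX PCI device IDs; with SPDK_TEST_NVMF_NICS=mlx5 the harness keeps only the Mellanox list and, further down, resolves each PCI function to its kernel net interface through sysfs. An approximate stand-alone sketch of that selection — not the harness's own code, which works from a pre-built PCI cache — using only standard sysfs paths:

    # Pick PCI functions whose vendor is Mellanox (0x15b3) and whose device ID is
    # in the ConnectX list above, then map each one to its net interface via sysfs.
    mlx_ids="0x1013 0x1015 0x1017 0x1019 0x101b 0x101d 0x1021 0xa2d6 0xa2dc"
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
        [[ $vendor == 0x15b3 ]] || continue
        [[ " $mlx_ids " == *" $device "* ]] || continue
        pci=${dev##*/}
        echo "Found $pci ($vendor - $device)"
        for net in "$dev"/net/*; do
            # on this CI node the interfaces have been renamed mlx_0_0 / mlx_0_1
            [[ -e $net ]] && echo "  net device: ${net##*/}"
        done
    done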
15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:16.729 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:16.730 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:16.730 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:16.730 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # rdma_device_init 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:16.730 15:49:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:16.730 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:16.730 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:16.730 altname enp217s0f0np0 00:12:16.730 altname ens818f0np0 00:12:16.730 inet 192.168.100.8/24 scope global mlx_0_0 00:12:16.730 valid_lft forever preferred_lft forever 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:16.730 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:16.730 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:16.730 altname enp217s0f1np1 00:12:16.730 altname ens818f1np1 00:12:16.730 inet 192.168.100.9/24 scope global mlx_0_1 00:12:16.730 valid_lft forever preferred_lft forever 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:16.730 15:49:00 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:16.730 15:49:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:16.730 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:16.731 192.168.100.9' 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:16.731 192.168.100.9' 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # head -n 1 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:16.731 192.168.100.9' 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # tail -n +2 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # head -n 1 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:16.731 ************************************ 00:12:16.731 START TEST nvmf_filesystem_no_in_capsule 00:12:16.731 ************************************ 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.731 15:49:01 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3236111 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3236111 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3236111 ']' 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.731 15:49:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.731 [2024-11-17 15:49:01.249092] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:16.731 [2024-11-17 15:49:01.249188] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.731 [2024-11-17 15:49:01.384150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.731 [2024-11-17 15:49:01.485751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.731 [2024-11-17 15:49:01.485798] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.731 [2024-11-17 15:49:01.485810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.731 [2024-11-17 15:49:01.485823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.731 [2024-11-17 15:49:01.485832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:16.731 [2024-11-17 15:49:01.488541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.731 [2024-11-17 15:49:01.488621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.731 [2024-11-17 15:49:01.488648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.731 [2024-11-17 15:49:01.488656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.731 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.731 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:16.731 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:16.731 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.731 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.731 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.731 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:16.731 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:16.731 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.731 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:16.731 [2024-11-17 15:49:02.097849] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:16.731 [2024-11-17 15:49:02.122029] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7efca7fbd940) succeed. 00:12:16.731 [2024-11-17 15:49:02.131397] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7efca7f79940) succeed. 
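At this point the trace has started the SPDK target application (four reactors, two IB devices created) and created the RDMA transport it will listen on. A rough equivalent of that bring-up, calling scripts/rpc.py directly instead of the harness's rpc_cmd wrapper — paths and flag values are the ones from this run, and the socket-wait loop is a simplification of the harness's waitforlisten:

    # Needs root and hugepages, as set up earlier in this job.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    # shared-memory id 0, tracepoint group mask 0xFFFF, core mask 0xF (4 cores)
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # wait until the RPC socket is up before issuing commands
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done

    # RDMA transport with the same knobs as the trace: 1024 shared buffers,
    # 8192-byte IO unit, in-capsule data size 0 (the "no_in_capsule" variant).
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192 -c 0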
00:12:16.990 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.990 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:16.990 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.990 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.559 Malloc1 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.559 [2024-11-17 15:49:02.855716] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:17.559 15:49:02 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:17.559 { 00:12:17.559 "name": "Malloc1", 00:12:17.559 "aliases": [ 00:12:17.559 "975a33b0-5182-44ad-b588-08257e354576" 00:12:17.559 ], 00:12:17.559 "product_name": "Malloc disk", 00:12:17.559 "block_size": 512, 00:12:17.559 "num_blocks": 1048576, 00:12:17.559 "uuid": "975a33b0-5182-44ad-b588-08257e354576", 00:12:17.559 "assigned_rate_limits": { 00:12:17.559 "rw_ios_per_sec": 0, 00:12:17.559 "rw_mbytes_per_sec": 0, 00:12:17.559 "r_mbytes_per_sec": 0, 00:12:17.559 "w_mbytes_per_sec": 0 00:12:17.559 }, 00:12:17.559 "claimed": true, 00:12:17.559 "claim_type": "exclusive_write", 00:12:17.559 "zoned": false, 00:12:17.559 "supported_io_types": { 00:12:17.559 "read": true, 00:12:17.559 "write": true, 00:12:17.559 "unmap": true, 00:12:17.559 "flush": true, 00:12:17.559 "reset": true, 00:12:17.559 "nvme_admin": false, 00:12:17.559 "nvme_io": false, 00:12:17.559 "nvme_io_md": false, 00:12:17.559 "write_zeroes": true, 00:12:17.559 "zcopy": true, 00:12:17.559 "get_zone_info": false, 00:12:17.559 "zone_management": false, 00:12:17.559 "zone_append": false, 00:12:17.559 "compare": false, 00:12:17.559 "compare_and_write": false, 00:12:17.559 "abort": true, 00:12:17.559 "seek_hole": false, 00:12:17.559 "seek_data": false, 00:12:17.559 "copy": true, 00:12:17.559 "nvme_iov_md": false 00:12:17.559 }, 00:12:17.559 "memory_domains": [ 00:12:17.559 { 00:12:17.559 "dma_device_id": "system", 00:12:17.559 "dma_device_type": 1 00:12:17.559 }, 00:12:17.559 { 00:12:17.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.559 "dma_device_type": 2 00:12:17.559 } 00:12:17.559 ], 00:12:17.559 "driver_specific": {} 00:12:17.559 } 00:12:17.559 ]' 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:12:17.559 15:49:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:18.495 15:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.495 15:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:18.495 15:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.495 15:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:18.495 15:49:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:20.526 15:49:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:20.526 15:49:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:20.526 15:49:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.526 15:49:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:20.526 15:49:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.526 15:49:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:20.526 15:49:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:20.526 15:49:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:20.526 15:49:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:20.526 15:49:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:20.526 15:49:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:20.526 15:49:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:20.526 15:49:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:20.526 15:49:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:20.526 15:49:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:20.526 15:49:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:12:20.526 15:49:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:20.526 15:49:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:20.786 15:49:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.723 ************************************ 00:12:21.723 START TEST filesystem_ext4 00:12:21.723 ************************************ 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:21.723 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:21.724 mke2fs 1.47.0 (5-Feb-2023) 00:12:21.983 Discarding device blocks: 0/522240 done 00:12:21.983 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:21.983 Filesystem UUID: a645f6da-b30d-4a7f-b709-207ae6b0c055 00:12:21.983 Superblock backups stored on 
blocks: 00:12:21.983 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:21.983 00:12:21.983 Allocating group tables: 0/64 done 00:12:21.983 Writing inode tables: 0/64 done 00:12:21.983 Creating journal (8192 blocks): done 00:12:21.983 Writing superblocks and filesystem accounting information: 0/64 done 00:12:21.983 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3236111 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:21.983 00:12:21.983 real 0m0.210s 00:12:21.983 user 0m0.030s 00:12:21.983 sys 0m0.076s 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:21.983 ************************************ 00:12:21.983 END TEST filesystem_ext4 00:12:21.983 ************************************ 00:12:21.983 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:21.984 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.984 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.984 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:12:22.244 ************************************ 00:12:22.244 START TEST filesystem_btrfs 00:12:22.244 ************************************ 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:22.244 btrfs-progs v6.8.1 00:12:22.244 See https://btrfs.readthedocs.io for more information. 00:12:22.244 00:12:22.244 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:22.244 NOTE: several default settings have changed in version 5.15, please make sure 00:12:22.244 this does not affect your deployments: 00:12:22.244 - DUP for metadata (-m dup) 00:12:22.244 - enabled no-holes (-O no-holes) 00:12:22.244 - enabled free-space-tree (-R free-space-tree) 00:12:22.244 00:12:22.244 Label: (null) 00:12:22.244 UUID: 8ec6fd43-6d0b-4abb-a12b-c9d714ee80a9 00:12:22.244 Node size: 16384 00:12:22.244 Sector size: 4096 (CPU page size: 4096) 00:12:22.244 Filesystem size: 510.00MiB 00:12:22.244 Block group profiles: 00:12:22.244 Data: single 8.00MiB 00:12:22.244 Metadata: DUP 32.00MiB 00:12:22.244 System: DUP 8.00MiB 00:12:22.244 SSD detected: yes 00:12:22.244 Zoned device: no 00:12:22.244 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:22.244 Checksum: crc32c 00:12:22.244 Number of devices: 1 00:12:22.244 Devices: 00:12:22.244 ID SIZE PATH 00:12:22.244 1 510.00MiB /dev/nvme0n1p1 00:12:22.244 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3236111 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:22.244 00:12:22.244 real 0m0.255s 00:12:22.244 user 0m0.034s 00:12:22.244 sys 0m0.126s 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.244 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:22.244 ************************************ 00:12:22.244 END TEST filesystem_btrfs 
00:12:22.244 ************************************ 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.504 ************************************ 00:12:22.504 START TEST filesystem_xfs 00:12:22.504 ************************************ 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:22.504 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:22.504 = sectsz=512 attr=2, projid32bit=1 00:12:22.504 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:22.504 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:22.504 data = bsize=4096 blocks=130560, imaxpct=25 00:12:22.504 = sunit=0 swidth=0 blks 00:12:22.504 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:22.504 log =internal log bsize=4096 blocks=16384, version=2 00:12:22.504 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:22.504 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:22.504 Discarding blocks...Done. 
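A condensed sketch of the make_filesystem helper exercised by these TESTs; only the force-flag selection and the mkfs call are visible in the xtrace above, so the function body below is an assumption, not the verified contents of autotest_common.sh:

    make_filesystem() {
        local fstype=$1        # ext4, btrfs or xfs in this test
        local dev_name=$2      # /dev/nvme0n1p1, the partition on the exported namespace
        local force
        # ext4 uses -F to force creation, the other mkfs tools use -f
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" $force "$dev_name"
    }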
00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:22.504 15:49:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:22.505 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:22.505 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:22.505 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:22.505 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:22.505 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:22.505 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:22.763 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3236111 00:12:22.763 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:22.763 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:22.763 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:22.763 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:22.763 00:12:22.763 real 0m0.216s 00:12:22.763 user 0m0.023s 00:12:22.763 sys 0m0.086s 00:12:22.763 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:22.763 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:22.763 ************************************ 00:12:22.763 END TEST filesystem_xfs 00:12:22.763 ************************************ 00:12:22.763 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:22.763 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:22.763 15:49:08 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.700 15:49:09 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3236111 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3236111 ']' 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3236111 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3236111 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3236111' 00:12:23.700 killing process with pid 3236111 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3236111 00:12:23.700 15:49:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3236111 00:12:26.991 15:49:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:26.991 00:12:26.991 real 0m10.762s 00:12:26.991 user 0m40.423s 00:12:26.991 sys 0m1.482s 00:12:26.991 15:49:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.991 15:49:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.991 ************************************ 00:12:26.991 END TEST nvmf_filesystem_no_in_capsule 00:12:26.991 ************************************ 00:12:26.991 15:49:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:26.991 15:49:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.991 15:49:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.991 15:49:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.991 ************************************ 00:12:26.991 START TEST nvmf_filesystem_in_capsule 00:12:26.991 ************************************ 00:12:26.991 15:49:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:26.991 15:49:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:26.991 15:49:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:26.991 15:49:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.991 15:49:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.991 15:49:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.992 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3238128 00:12:26.992 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.992 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3238128 00:12:26.992 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3238128 ']' 00:12:26.992 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.992 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.992 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
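How nvmfappstart brings the target up for this in-capsule run, sketched from the trace; the polling loop is an assumption — the real waitforlisten helper in autotest_common.sh is not reproduced here, and spdk_get_version is just used as a cheap RPC to probe the socket:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait until the target answers on its RPC socket before issuing any rpc_cmd
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done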
00:12:26.992 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.992 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.992 [2024-11-17 15:49:12.097371] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:26.992 [2024-11-17 15:49:12.097469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.992 [2024-11-17 15:49:12.232703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.992 [2024-11-17 15:49:12.332802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.992 [2024-11-17 15:49:12.332849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.992 [2024-11-17 15:49:12.332861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.992 [2024-11-17 15:49:12.332874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.992 [2024-11-17 15:49:12.332886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.992 [2024-11-17 15:49:12.335302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.992 [2024-11-17 15:49:12.335377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.992 [2024-11-17 15:49:12.335435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.992 [2024-11-17 15:49:12.335443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.560 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.560 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:27.560 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.560 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.560 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.560 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.560 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:27.560 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:12:27.560 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.560 15:49:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:27.560 [2024-11-17 15:49:12.972150] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x612000029140/0x7f17de9bd940) succeed. 00:12:27.560 [2024-11-17 15:49:12.981461] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f17de979940) succeed. 00:12:27.819 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.819 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:27.819 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.819 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.387 Malloc1 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.387 [2024-11-17 15:49:13.775916] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1384 -- # local bs 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.387 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:28.387 { 00:12:28.387 "name": "Malloc1", 00:12:28.387 "aliases": [ 00:12:28.387 "ab19ac02-3ffc-412c-ae21-a5d76c419ca9" 00:12:28.387 ], 00:12:28.387 "product_name": "Malloc disk", 00:12:28.387 "block_size": 512, 00:12:28.387 "num_blocks": 1048576, 00:12:28.387 "uuid": "ab19ac02-3ffc-412c-ae21-a5d76c419ca9", 00:12:28.387 "assigned_rate_limits": { 00:12:28.387 "rw_ios_per_sec": 0, 00:12:28.387 "rw_mbytes_per_sec": 0, 00:12:28.387 "r_mbytes_per_sec": 0, 00:12:28.387 "w_mbytes_per_sec": 0 00:12:28.387 }, 00:12:28.387 "claimed": true, 00:12:28.387 "claim_type": "exclusive_write", 00:12:28.387 "zoned": false, 00:12:28.387 "supported_io_types": { 00:12:28.387 "read": true, 00:12:28.387 "write": true, 00:12:28.387 "unmap": true, 00:12:28.387 "flush": true, 00:12:28.387 "reset": true, 00:12:28.387 "nvme_admin": false, 00:12:28.387 "nvme_io": false, 00:12:28.388 "nvme_io_md": false, 00:12:28.388 "write_zeroes": true, 00:12:28.388 "zcopy": true, 00:12:28.388 "get_zone_info": false, 00:12:28.388 "zone_management": false, 00:12:28.388 "zone_append": false, 00:12:28.388 "compare": false, 00:12:28.388 "compare_and_write": false, 00:12:28.388 "abort": true, 00:12:28.388 "seek_hole": false, 00:12:28.388 "seek_data": false, 00:12:28.388 "copy": true, 00:12:28.388 "nvme_iov_md": false 00:12:28.388 }, 00:12:28.388 "memory_domains": [ 00:12:28.388 { 00:12:28.388 "dma_device_id": "system", 00:12:28.388 "dma_device_type": 1 00:12:28.388 }, 00:12:28.388 { 00:12:28.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.388 "dma_device_type": 2 00:12:28.388 } 00:12:28.388 ], 00:12:28.388 "driver_specific": {} 00:12:28.388 } 00:12:28.388 ]' 00:12:28.388 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:28.388 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:28.388 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:28.388 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:28.388 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:28.388 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:28.388 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@58 -- # malloc_size=536870912 00:12:28.388 15:49:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:29.765 15:49:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.765 15:49:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:29.765 15:49:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.765 15:49:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:29.765 15:49:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:31.667 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:31.668 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:31.668 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 
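Recap of the target-side setup and host attach traced above for the 4096-byte in-capsule case; rpc_cmd in the trace is assumed to forward to scripts/rpc.py on /var/tmp/spdk.sock:

    # -c 4096 sets the in-capsule data size, which is the point of this test variant
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    rpc.py bdev_malloc_create 512 512 -b Malloc1      # 512 MiB backing bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # host side: attach over RDMA, after which the bdev appears as nvme0n1 (serial SPDKISFASTANDAWESOME)
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e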
00:12:31.668 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:31.668 15:49:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:31.668 15:49:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.605 ************************************ 00:12:32.605 START TEST filesystem_in_capsule_ext4 00:12:32.605 ************************************ 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:32.605 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:32.605 mke2fs 1.47.0 (5-Feb-2023) 00:12:32.865 Discarding device blocks: 0/522240 done 00:12:32.865 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:32.865 Filesystem UUID: 
91ccaf60-1800-427a-b515-4f72b11ab866 00:12:32.865 Superblock backups stored on blocks: 00:12:32.865 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:32.865 00:12:32.865 Allocating group tables: 0/64 done 00:12:32.865 Writing inode tables: 0/64 done 00:12:32.865 Creating journal (8192 blocks): done 00:12:32.865 Writing superblocks and filesystem accounting information: 0/64 done 00:12:32.865 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3238128 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:32.865 00:12:32.865 real 0m0.207s 00:12:32.865 user 0m0.034s 00:12:32.865 sys 0m0.074s 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:32.865 ************************************ 00:12:32.865 END TEST filesystem_in_capsule_ext4 00:12:32.865 ************************************ 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:32.865 15:49:18 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:32.865 ************************************ 00:12:32.865 START TEST filesystem_in_capsule_btrfs 00:12:32.865 ************************************ 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:32.865 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:33.125 btrfs-progs v6.8.1 00:12:33.125 See https://btrfs.readthedocs.io for more information. 00:12:33.125 00:12:33.125 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:33.125 NOTE: several default settings have changed in version 5.15, please make sure 00:12:33.125 this does not affect your deployments: 00:12:33.125 - DUP for metadata (-m dup) 00:12:33.125 - enabled no-holes (-O no-holes) 00:12:33.125 - enabled free-space-tree (-R free-space-tree) 00:12:33.125 00:12:33.125 Label: (null) 00:12:33.125 UUID: 4a491d39-f57b-4273-9dcd-5368d7be71bc 00:12:33.125 Node size: 16384 00:12:33.125 Sector size: 4096 (CPU page size: 4096) 00:12:33.125 Filesystem size: 510.00MiB 00:12:33.125 Block group profiles: 00:12:33.125 Data: single 8.00MiB 00:12:33.125 Metadata: DUP 32.00MiB 00:12:33.125 System: DUP 8.00MiB 00:12:33.125 SSD detected: yes 00:12:33.125 Zoned device: no 00:12:33.125 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:33.125 Checksum: crc32c 00:12:33.125 Number of devices: 1 00:12:33.125 Devices: 00:12:33.125 ID SIZE PATH 00:12:33.125 1 510.00MiB /dev/nvme0n1p1 00:12:33.125 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3238128 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:33.125 00:12:33.125 real 0m0.258s 00:12:33.125 user 0m0.028s 00:12:33.125 sys 0m0.132s 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.125 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.125 ************************************ 00:12:33.125 END TEST filesystem_in_capsule_btrfs 00:12:33.125 ************************************ 00:12:33.384 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:33.384 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:33.384 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:33.384 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:33.385 ************************************ 00:12:33.385 START TEST filesystem_in_capsule_xfs 00:12:33.385 ************************************ 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:33.385 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:33.385 = sectsz=512 attr=2, projid32bit=1 00:12:33.385 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:33.385 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:33.385 data = bsize=4096 blocks=130560, imaxpct=25 00:12:33.385 = sunit=0 swidth=0 blks 00:12:33.385 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:33.385 log =internal log bsize=4096 blocks=16384, version=2 00:12:33.385 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:33.385 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:33.385 Discarding blocks...Done. 
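The per-filesystem check that each of these TESTs runs after mkfs, condensed from the xtrace; the retry counter i and the error-handling paths of target/filesystem.sh are not reproduced here:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target process must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still exported
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present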
00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:33.385 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:33.644 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3238128 00:12:33.644 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:33.645 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:33.645 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:33.645 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:33.645 00:12:33.645 real 0m0.213s 00:12:33.645 user 0m0.023s 00:12:33.645 sys 0m0.080s 00:12:33.645 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:33.645 15:49:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:33.645 ************************************ 00:12:33.645 END TEST filesystem_in_capsule_xfs 00:12:33.645 ************************************ 00:12:33.645 15:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:33.645 15:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:33.645 15:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.580 15:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:34.580 15:49:19 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:34.580 15:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:34.580 15:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.580 15:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:34.580 15:49:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:34.580 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:34.580 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.580 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.580 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:34.580 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.580 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:34.581 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3238128 00:12:34.581 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3238128 ']' 00:12:34.581 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3238128 00:12:34.581 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:34.581 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:34.581 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3238128 00:12:34.581 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:34.581 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:34.581 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3238128' 00:12:34.581 killing process with pid 3238128 00:12:34.581 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3238128 00:12:34.581 15:49:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3238128 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:37.889 00:12:37.889 real 0m11.133s 
00:12:37.889 user 0m41.401s 00:12:37.889 sys 0m1.441s 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:37.889 ************************************ 00:12:37.889 END TEST nvmf_filesystem_in_capsule 00:12:37.889 ************************************ 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:37.889 rmmod nvme_rdma 00:12:37.889 rmmod nvme_fabrics 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.889 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:37.890 00:12:37.890 real 0m29.768s 00:12:37.890 user 1m24.170s 00:12:37.890 sys 0m8.688s 00:12:37.890 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:37.890 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.890 ************************************ 00:12:37.890 END TEST nvmf_filesystem 00:12:37.890 ************************************ 00:12:37.890 15:49:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:12:37.890 15:49:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:37.890 15:49:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:37.890 15:49:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.890 ************************************ 00:12:37.890 START TEST nvmf_target_discovery 00:12:37.890 ************************************ 00:12:37.890 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:12:37.890 * Looking for test storage... 
00:12:37.890 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:37.890 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:37.890 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:37.890 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:38.149 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:38.149 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.149 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.149 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.149 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.149 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.149 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.149 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:38.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.150 --rc genhtml_branch_coverage=1 00:12:38.150 --rc genhtml_function_coverage=1 00:12:38.150 --rc genhtml_legend=1 00:12:38.150 --rc geninfo_all_blocks=1 00:12:38.150 --rc geninfo_unexecuted_blocks=1 00:12:38.150 00:12:38.150 ' 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:38.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.150 --rc genhtml_branch_coverage=1 00:12:38.150 --rc genhtml_function_coverage=1 00:12:38.150 --rc genhtml_legend=1 00:12:38.150 --rc geninfo_all_blocks=1 00:12:38.150 --rc geninfo_unexecuted_blocks=1 00:12:38.150 00:12:38.150 ' 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:38.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.150 --rc genhtml_branch_coverage=1 00:12:38.150 --rc genhtml_function_coverage=1 00:12:38.150 --rc genhtml_legend=1 00:12:38.150 --rc geninfo_all_blocks=1 00:12:38.150 --rc geninfo_unexecuted_blocks=1 00:12:38.150 00:12:38.150 ' 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:38.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.150 --rc genhtml_branch_coverage=1 00:12:38.150 --rc genhtml_function_coverage=1 00:12:38.150 --rc genhtml_legend=1 00:12:38.150 --rc geninfo_all_blocks=1 00:12:38.150 --rc geninfo_unexecuted_blocks=1 00:12:38.150 00:12:38.150 ' 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.150 15:49:23 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:38.150 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:38.150 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:38.151 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:38.151 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.151 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.151 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.151 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:38.151 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:38.151 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:38.151 15:49:23 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:44.725 15:49:29 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:44.725 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:44.725 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:44.725 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:44.726 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.726 15:49:29 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:44.726 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # rdma_device_init 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:44.726 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:44.726 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:44.726 altname enp217s0f0np0 00:12:44.726 altname ens818f0np0 00:12:44.726 inet 192.168.100.8/24 scope global mlx_0_0 00:12:44.726 valid_lft forever preferred_lft forever 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:44.726 15:49:29 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:44.726 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:44.726 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:44.726 altname enp217s0f1np1 00:12:44.726 altname ens818f1np1 00:12:44.726 inet 192.168.100.9/24 scope global mlx_0_1 00:12:44.726 valid_lft forever preferred_lft forever 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:44.726 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:44.727 192.168.100.9' 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:44.727 192.168.100.9' 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # head -n 1 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:44.727 192.168.100.9' 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # head -n 1 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # tail -n +2 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:44.727 15:49:29 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3243532 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3243532 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3243532 ']' 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.727 15:49:29 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:44.727 [2024-11-17 15:49:29.877590] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:44.727 [2024-11-17 15:49:29.877702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.727 [2024-11-17 15:49:30.016015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.727 [2024-11-17 15:49:30.123361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.727 [2024-11-17 15:49:30.123412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.727 [2024-11-17 15:49:30.123425] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.727 [2024-11-17 15:49:30.123439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.727 [2024-11-17 15:49:30.123449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:44.727 [2024-11-17 15:49:30.125909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.727 [2024-11-17 15:49:30.125939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.727 [2024-11-17 15:49:30.125999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.727 [2024-11-17 15:49:30.126007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.296 15:49:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.296 15:49:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:45.296 15:49:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:45.296 15:49:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:45.296 15:49:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.296 15:49:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.296 15:49:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:45.296 15:49:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.296 15:49:30 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.296 [2024-11-17 15:49:30.742812] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f3f74784940) succeed. 00:12:45.296 [2024-11-17 15:49:30.752292] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f3f74740940) succeed. 
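A minimal sketch of the RPC sequence that target/discovery.sh drives through rpc_cmd in the trace that follows, assuming nvmf_tgt is already running on the default /var/tmp/spdk.sock and that scripts/rpc.py from the SPDK checkout is invoked directly (an assumption; the test wraps these calls in the rpc_cmd helper). Only the cnode1 pass is shown; the loop below repeats the bdev/subsystem/namespace/listener steps for Null2-Null4 and cnode2-cnode4:
  # path to rpc.py inside the workspace checkout (assumed)
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  # create the RDMA transport once
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  # null bdev (size/block-size values taken from NULL_BDEV_SIZE / NULL_BLOCK_SIZE)
  $rpc bdev_null_create Null1 102400 512
  # subsystem with allow-any-host and a fixed serial, then attach the namespace
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  # RDMA listener on the first target IP
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  # discovery listener plus a referral on port 4430
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430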
00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.556 Null1 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.556 [2024-11-17 15:49:31.072538] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.556 Null2 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:45.556 15:49:31 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.556 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.816 Null3 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.816 15:49:31 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.816 Null4 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.816 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:12:45.816 00:12:45.816 Discovery Log Number of Records 6, Generation counter 6 00:12:45.816 =====Discovery Log Entry 0====== 00:12:45.816 trtype: rdma 00:12:45.816 adrfam: ipv4 00:12:45.816 subtype: current discovery subsystem 00:12:45.816 treq: not required 00:12:45.816 portid: 0 00:12:45.816 trsvcid: 4420 00:12:45.816 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:45.816 traddr: 192.168.100.8 00:12:45.816 eflags: explicit discovery connections, duplicate discovery information 00:12:45.816 rdma_prtype: not specified 00:12:45.816 rdma_qptype: connected 00:12:45.816 rdma_cms: rdma-cm 00:12:45.816 rdma_pkey: 0x0000 00:12:45.816 =====Discovery Log Entry 1====== 00:12:45.816 trtype: rdma 00:12:45.816 adrfam: ipv4 00:12:45.816 subtype: nvme subsystem 00:12:45.816 treq: not required 00:12:45.816 portid: 0 00:12:45.816 trsvcid: 4420 00:12:45.816 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:45.816 traddr: 192.168.100.8 00:12:45.816 eflags: none 00:12:45.816 rdma_prtype: not specified 00:12:45.816 rdma_qptype: connected 00:12:45.816 rdma_cms: rdma-cm 00:12:45.816 rdma_pkey: 0x0000 00:12:45.816 =====Discovery Log Entry 2====== 00:12:45.816 trtype: rdma 00:12:45.816 adrfam: ipv4 00:12:45.816 subtype: nvme subsystem 00:12:45.816 treq: not required 00:12:45.816 portid: 0 00:12:45.816 trsvcid: 4420 00:12:45.816 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:45.816 traddr: 192.168.100.8 00:12:45.816 eflags: none 00:12:45.816 rdma_prtype: not specified 00:12:45.816 rdma_qptype: connected 00:12:45.816 rdma_cms: rdma-cm 00:12:45.816 rdma_pkey: 0x0000 00:12:45.816 =====Discovery Log Entry 3====== 00:12:45.816 trtype: rdma 00:12:45.816 adrfam: ipv4 00:12:45.816 subtype: nvme subsystem 00:12:45.816 treq: not required 00:12:45.816 portid: 0 00:12:45.816 trsvcid: 4420 00:12:45.816 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:45.816 traddr: 192.168.100.8 00:12:45.816 eflags: none 00:12:45.816 rdma_prtype: not specified 00:12:45.816 rdma_qptype: connected 00:12:45.816 rdma_cms: rdma-cm 00:12:45.816 rdma_pkey: 0x0000 00:12:45.816 =====Discovery Log Entry 4====== 00:12:45.816 trtype: rdma 00:12:45.816 adrfam: ipv4 00:12:45.816 subtype: nvme subsystem 00:12:45.816 treq: not required 00:12:45.816 portid: 0 00:12:45.816 trsvcid: 4420 00:12:45.817 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:45.817 traddr: 192.168.100.8 00:12:45.817 eflags: none 00:12:45.817 rdma_prtype: not specified 00:12:45.817 rdma_qptype: connected 00:12:45.817 rdma_cms: rdma-cm 00:12:45.817 rdma_pkey: 0x0000 00:12:45.817 =====Discovery Log Entry 5====== 00:12:45.817 trtype: rdma 00:12:45.817 adrfam: ipv4 00:12:45.817 subtype: discovery subsystem referral 00:12:45.817 treq: not required 00:12:45.817 portid: 0 00:12:45.817 trsvcid: 4430 00:12:45.817 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:45.817 traddr: 192.168.100.8 00:12:45.817 eflags: none 00:12:45.817 rdma_prtype: unrecognized 00:12:45.817 rdma_qptype: unrecognized 00:12:45.817 rdma_cms: unrecognized 00:12:45.817 rdma_pkey: 0x0000 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:45.817 Perform nvmf subsystem discovery via RPC 00:12:45.817 15:49:31 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.817 [ 00:12:45.817 { 00:12:45.817 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:45.817 "subtype": "Discovery", 00:12:45.817 "listen_addresses": [ 00:12:45.817 { 00:12:45.817 "trtype": "RDMA", 00:12:45.817 "adrfam": "IPv4", 00:12:45.817 "traddr": "192.168.100.8", 00:12:45.817 "trsvcid": "4420" 00:12:45.817 } 00:12:45.817 ], 00:12:45.817 "allow_any_host": true, 00:12:45.817 "hosts": [] 00:12:45.817 }, 00:12:45.817 { 00:12:45.817 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:45.817 "subtype": "NVMe", 00:12:45.817 "listen_addresses": [ 00:12:45.817 { 00:12:45.817 "trtype": "RDMA", 00:12:45.817 "adrfam": "IPv4", 00:12:45.817 "traddr": "192.168.100.8", 00:12:45.817 "trsvcid": "4420" 00:12:45.817 } 00:12:45.817 ], 00:12:45.817 "allow_any_host": true, 00:12:45.817 "hosts": [], 00:12:45.817 "serial_number": "SPDK00000000000001", 00:12:45.817 "model_number": "SPDK bdev Controller", 00:12:45.817 "max_namespaces": 32, 00:12:45.817 "min_cntlid": 1, 00:12:45.817 "max_cntlid": 65519, 00:12:45.817 "namespaces": [ 00:12:45.817 { 00:12:45.817 "nsid": 1, 00:12:45.817 "bdev_name": "Null1", 00:12:45.817 "name": "Null1", 00:12:45.817 "nguid": "A15948B26B4F43B499FF7042E0C03B9C", 00:12:45.817 "uuid": "a15948b2-6b4f-43b4-99ff-7042e0c03b9c" 00:12:45.817 } 00:12:45.817 ] 00:12:45.817 }, 00:12:45.817 { 00:12:45.817 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:45.817 "subtype": "NVMe", 00:12:45.817 "listen_addresses": [ 00:12:45.817 { 00:12:45.817 "trtype": "RDMA", 00:12:45.817 "adrfam": "IPv4", 00:12:45.817 "traddr": "192.168.100.8", 00:12:45.817 "trsvcid": "4420" 00:12:45.817 } 00:12:45.817 ], 00:12:45.817 "allow_any_host": true, 00:12:45.817 "hosts": [], 00:12:45.817 "serial_number": "SPDK00000000000002", 00:12:45.817 "model_number": "SPDK bdev Controller", 00:12:45.817 "max_namespaces": 32, 00:12:45.817 "min_cntlid": 1, 00:12:45.817 "max_cntlid": 65519, 00:12:45.817 "namespaces": [ 00:12:45.817 { 00:12:45.817 "nsid": 1, 00:12:45.817 "bdev_name": "Null2", 00:12:45.817 "name": "Null2", 00:12:45.817 "nguid": "B4A659C4591747FB8FC2C8AF6A7E4D7C", 00:12:45.817 "uuid": "b4a659c4-5917-47fb-8fc2-c8af6a7e4d7c" 00:12:45.817 } 00:12:45.817 ] 00:12:45.817 }, 00:12:45.817 { 00:12:45.817 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:45.817 "subtype": "NVMe", 00:12:45.817 "listen_addresses": [ 00:12:45.817 { 00:12:45.817 "trtype": "RDMA", 00:12:45.817 "adrfam": "IPv4", 00:12:45.817 "traddr": "192.168.100.8", 00:12:45.817 "trsvcid": "4420" 00:12:45.817 } 00:12:45.817 ], 00:12:45.817 "allow_any_host": true, 00:12:45.817 "hosts": [], 00:12:45.817 "serial_number": "SPDK00000000000003", 00:12:45.817 "model_number": "SPDK bdev Controller", 00:12:45.817 "max_namespaces": 32, 00:12:45.817 "min_cntlid": 1, 00:12:45.817 "max_cntlid": 65519, 00:12:45.817 "namespaces": [ 00:12:45.817 { 00:12:45.817 "nsid": 1, 00:12:45.817 "bdev_name": "Null3", 00:12:45.817 "name": "Null3", 00:12:45.817 "nguid": "04A47C8744BF42BCA97DA2F5B6C1BD15", 00:12:45.817 "uuid": "04a47c87-44bf-42bc-a97d-a2f5b6c1bd15" 00:12:45.817 } 00:12:45.817 ] 00:12:45.817 }, 00:12:45.817 { 00:12:45.817 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:45.817 "subtype": "NVMe", 00:12:45.817 "listen_addresses": [ 00:12:45.817 { 00:12:45.817 
"trtype": "RDMA", 00:12:45.817 "adrfam": "IPv4", 00:12:45.817 "traddr": "192.168.100.8", 00:12:45.817 "trsvcid": "4420" 00:12:45.817 } 00:12:45.817 ], 00:12:45.817 "allow_any_host": true, 00:12:45.817 "hosts": [], 00:12:45.817 "serial_number": "SPDK00000000000004", 00:12:45.817 "model_number": "SPDK bdev Controller", 00:12:45.817 "max_namespaces": 32, 00:12:45.817 "min_cntlid": 1, 00:12:45.817 "max_cntlid": 65519, 00:12:45.817 "namespaces": [ 00:12:45.817 { 00:12:45.817 "nsid": 1, 00:12:45.817 "bdev_name": "Null4", 00:12:45.817 "name": "Null4", 00:12:45.817 "nguid": "F014ACFD72D24594964970A9DFCC6C71", 00:12:45.817 "uuid": "f014acfd-72d2-4594-9649-70a9dfcc6c71" 00:12:45.817 } 00:12:45.817 ] 00:12:45.817 } 00:12:45.817 ] 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:45.817 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:46.076 
15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:46.076 15:49:31 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:46.076 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:46.077 rmmod nvme_rdma 00:12:46.077 rmmod nvme_fabrics 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3243532 ']' 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3243532 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3243532 ']' 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3243532 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3243532 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3243532' 00:12:46.077 killing process with pid 3243532 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3243532 00:12:46.077 15:49:31 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3243532 00:12:47.983 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:47.983 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:47.983 00:12:47.984 real 0m9.907s 00:12:47.984 user 0m12.464s 00:12:47.984 sys 0m5.412s 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:47.984 ************************************ 00:12:47.984 END TEST 
nvmf_target_discovery 00:12:47.984 ************************************ 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.984 ************************************ 00:12:47.984 START TEST nvmf_referrals 00:12:47.984 ************************************ 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:12:47.984 * Looking for test storage... 00:12:47.984 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:47.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.984 --rc genhtml_branch_coverage=1 00:12:47.984 --rc genhtml_function_coverage=1 00:12:47.984 --rc genhtml_legend=1 00:12:47.984 --rc geninfo_all_blocks=1 00:12:47.984 --rc geninfo_unexecuted_blocks=1 00:12:47.984 00:12:47.984 ' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:47.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.984 --rc genhtml_branch_coverage=1 00:12:47.984 --rc genhtml_function_coverage=1 00:12:47.984 --rc genhtml_legend=1 00:12:47.984 --rc geninfo_all_blocks=1 00:12:47.984 --rc geninfo_unexecuted_blocks=1 00:12:47.984 00:12:47.984 ' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:47.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.984 --rc genhtml_branch_coverage=1 00:12:47.984 --rc genhtml_function_coverage=1 00:12:47.984 --rc genhtml_legend=1 00:12:47.984 --rc geninfo_all_blocks=1 00:12:47.984 --rc geninfo_unexecuted_blocks=1 00:12:47.984 00:12:47.984 ' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:47.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.984 --rc genhtml_branch_coverage=1 00:12:47.984 --rc genhtml_function_coverage=1 00:12:47.984 --rc genhtml_legend=1 00:12:47.984 --rc geninfo_all_blocks=1 00:12:47.984 --rc geninfo_unexecuted_blocks=1 00:12:47.984 00:12:47.984 ' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.984 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.984 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.985 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.985 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:47.985 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:47.985 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:47.985 15:49:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.554 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.554 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:54.554 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:12:54.555 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:12:54.555 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:12:54.555 Found net devices under 0000:d9:00.0: mlx_0_0 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:12:54.555 Found net devices under 0000:d9:00.1: mlx_0_1 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # 
[[ rdma == tcp ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # rdma_device_init 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@530 -- # allocate_nic_ips 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:54.555 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:54.556 15:49:39 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:54.556 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:54.556 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:12:54.556 altname enp217s0f0np0 00:12:54.556 altname ens818f0np0 00:12:54.556 inet 192.168.100.8/24 scope global mlx_0_0 00:12:54.556 valid_lft forever preferred_lft forever 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:54.556 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:54.556 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:12:54.556 altname enp217s0f1np1 00:12:54.556 altname ens818f1np1 00:12:54.556 inet 192.168.100.9/24 scope global mlx_0_1 00:12:54.556 valid_lft forever preferred_lft forever 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:54.556 15:49:39 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:12:54.556 192.168.100.9' 
00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:12:54.556 192.168.100.9' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # head -n 1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:12:54.556 192.168.100.9' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # tail -n +2 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # head -n 1 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3247269 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3247269 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3247269 ']' 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.556 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.557 15:49:39 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:54.557 [2024-11-17 15:49:39.673382] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
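nvmfappstart -m 0xF above launches build/bin/nvmf_tgt on four cores and then blocks in waitforlisten until the target's RPC socket answers. Outside the test harness, a minimal wait loop against the default socket might look like the sketch below (the socket path and retry budget are assumptions, not values from the trace):

# Sketch: start nvmf_tgt and wait until its JSON-RPC socket responds.
./build/bin/nvmf_tgt -m 0xF &
NVMF_PID=$!

for _ in $(seq 1 100); do
  if ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
    break                        # target is up and serving RPCs
  fi
  sleep 0.2
done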
00:12:54.557 [2024-11-17 15:49:39.673473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.557 [2024-11-17 15:49:39.803978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.557 [2024-11-17 15:49:39.904283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.557 [2024-11-17 15:49:39.904329] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.557 [2024-11-17 15:49:39.904341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.557 [2024-11-17 15:49:39.904353] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.557 [2024-11-17 15:49:39.904364] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.557 [2024-11-17 15:49:39.906678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.557 [2024-11-17 15:49:39.906753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.557 [2024-11-17 15:49:39.906812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.557 [2024-11-17 15:49:39.906821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.125 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.125 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:55.125 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.125 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:55.125 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.125 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.125 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:55.125 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.125 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.125 [2024-11-17 15:49:40.570931] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fa5a3f53940) succeed. 00:12:55.125 [2024-11-17 15:49:40.580335] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fa5a3f0e940) succeed. 
00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.384 [2024-11-17 15:49:40.862106] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.384 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:55.644 15:49:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:55.644 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:55.903 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:55.904 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:55.904 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:55.904 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # 
[[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.163 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:56.423 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:56.423 15:49:41 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:56.682 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:56.682 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:56.682 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:56.682 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:56.682 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:56.682 15:49:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:56.682 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 
-- # sort 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:56.942 rmmod nvme_rdma 00:12:56.942 rmmod nvme_fabrics 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3247269 ']' 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3247269 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3247269 ']' 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3247269 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3247269 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3247269' 00:12:56.942 killing process with pid 3247269 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3247269 00:12:56.942 15:49:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3247269 00:12:58.850 15:49:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:58.850 15:49:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:12:58.850 00:12:58.850 real 0m10.695s 00:12:58.850 user 0m17.069s 00:12:58.850 sys 0m5.704s 00:12:58.850 15:49:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:58.851 15:49:43 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:58.851 ************************************ 00:12:58.851 END TEST nvmf_referrals 00:12:58.851 ************************************ 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.851 ************************************ 00:12:58.851 START TEST nvmf_connect_disconnect 00:12:58.851 ************************************ 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:12:58.851 * Looking for test storage... 00:12:58.851 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:58.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.851 --rc genhtml_branch_coverage=1 00:12:58.851 --rc genhtml_function_coverage=1 00:12:58.851 --rc genhtml_legend=1 00:12:58.851 --rc geninfo_all_blocks=1 00:12:58.851 --rc geninfo_unexecuted_blocks=1 00:12:58.851 00:12:58.851 ' 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:58.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.851 --rc genhtml_branch_coverage=1 00:12:58.851 --rc genhtml_function_coverage=1 00:12:58.851 --rc genhtml_legend=1 00:12:58.851 --rc geninfo_all_blocks=1 00:12:58.851 --rc geninfo_unexecuted_blocks=1 00:12:58.851 00:12:58.851 ' 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:58.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.851 --rc genhtml_branch_coverage=1 00:12:58.851 --rc genhtml_function_coverage=1 00:12:58.851 --rc genhtml_legend=1 00:12:58.851 --rc geninfo_all_blocks=1 00:12:58.851 --rc geninfo_unexecuted_blocks=1 00:12:58.851 00:12:58.851 ' 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:58.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:58.851 --rc genhtml_branch_coverage=1 00:12:58.851 --rc genhtml_function_coverage=1 00:12:58.851 --rc genhtml_legend=1 00:12:58.851 --rc geninfo_all_blocks=1 00:12:58.851 --rc geninfo_unexecuted_blocks=1 00:12:58.851 00:12:58.851 ' 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.851 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.852 15:49:44 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.852 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:58.852 15:49:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.422 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 
00:13:05.423 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:13:05.423 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:13:05.423 Found net devices under 0000:d9:00.0: mlx_0_0 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 
00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:13:05.423 Found net devices under 0000:d9:00.1: mlx_0_1 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:05.423 15:49:50 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:05.423 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:05.424 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:05.424 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:13:05.424 altname enp217s0f0np0 00:13:05.424 altname ens818f0np0 00:13:05.424 inet 192.168.100.8/24 scope global mlx_0_0 00:13:05.424 valid_lft forever preferred_lft forever 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:05.424 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:05.424 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:13:05.424 altname enp217s0f1np1 00:13:05.424 altname ens818f1np1 00:13:05.424 inet 192.168.100.9/24 scope global mlx_0_1 00:13:05.424 valid_lft forever preferred_lft forever 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:05.424 15:49:50 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:13:05.424 192.168.100.9' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:13:05.424 192.168.100.9' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:13:05.424 192.168.100.9' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:05.424 15:49:50 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3251327 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3251327 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3251327 ']' 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:05.424 15:49:50 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:05.424 [2024-11-17 15:49:50.719631] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:05.424 [2024-11-17 15:49:50.719750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.424 [2024-11-17 15:49:50.855770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:05.424 [2024-11-17 15:49:50.958128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:05.424 [2024-11-17 15:49:50.958173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:05.424 [2024-11-17 15:49:50.958185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:05.424 [2024-11-17 15:49:50.958197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:05.424 [2024-11-17 15:49:50.958206] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
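The trace above resolves each RDMA-capable netdev (mlx_0_0, mlx_0_1) to its IPv4 address before the connect/disconnect test begins. A minimal sketch of that lookup, assuming only the ip/awk/cut pipeline visible in the xtrace (the helper name get_ip_address appears in nvmf/common.sh in the trace; the loop and fallback message are illustrative):

```bash
#!/usr/bin/env bash
# Sketch of the per-interface address lookup seen in the xtrace above
# (nvmf/common.sh get_ip_address); the loop and fallback text are illustrative.
get_ip_address() {
    local interface=$1
    # 'ip -o -4' prints one line per address; field 4 is "ADDR/PREFIX",
    # so cut strips the prefix length, exactly as the trace shows.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

for nic in mlx_0_0 mlx_0_1; do
    addr=$(get_ip_address "$nic")
    echo "$nic -> ${addr:-<no IPv4 address>}"
done
# In this run: mlx_0_0 -> 192.168.100.8, mlx_0_1 -> 192.168.100.9
```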
00:13:05.424 [2024-11-17 15:49:50.960764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:05.424 [2024-11-17 15:49:50.960781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:05.424 [2024-11-17 15:49:50.960855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.424 [2024-11-17 15:49:50.960863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.029 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:06.029 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:06.029 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:06.029 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:06.029 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 [2024-11-17 15:49:51.579807] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:13:06.340 [2024-11-17 15:49:51.604608] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fc62e701940) succeed. 00:13:06.340 [2024-11-17 15:49:51.614109] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fc62ddbd940) succeed. 
00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:06.340 [2024-11-17 15:49:51.865809] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:06.340 15:49:51 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:09.624 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.485 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.307 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:13:28.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.173 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.576 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.964 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.646 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.003 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:28.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:44.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:03.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:06.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:10.026 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:19.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:22.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:25.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:28.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:34.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.129 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:41.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:44.696 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:50.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:53.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:00.362 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.892 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.589 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:18.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:18:21.975 15:55:07 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:21.975 rmmod nvme_rdma 00:18:21.975 rmmod nvme_fabrics 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3251327 ']' 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3251327 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3251327 ']' 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3251327 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:21.975 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3251327 00:18:22.233 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:22.233 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:22.233 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3251327' 00:18:22.233 killing process with pid 3251327 00:18:22.233 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3251327 00:18:22.233 15:55:07 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3251327 00:18:23.610 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:23.610 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:23.610 00:18:23.610 real 5m24.970s 00:18:23.610 user 21m7.018s 00:18:23.610 sys 0m18.364s 00:18:23.610 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.610 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:18:23.610 
************************************ 00:18:23.610 END TEST nvmf_connect_disconnect 00:18:23.610 ************************************ 00:18:23.610 15:55:09 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:18:23.610 15:55:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:23.610 15:55:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.610 15:55:09 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:23.610 ************************************ 00:18:23.610 START TEST nvmf_multitarget 00:18:23.610 ************************************ 00:18:23.610 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:18:23.874 * Looking for test storage... 00:18:23.874 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:23.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.874 --rc genhtml_branch_coverage=1 00:18:23.874 --rc genhtml_function_coverage=1 00:18:23.874 --rc genhtml_legend=1 00:18:23.874 --rc geninfo_all_blocks=1 00:18:23.874 --rc geninfo_unexecuted_blocks=1 00:18:23.874 00:18:23.874 ' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:23.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.874 --rc genhtml_branch_coverage=1 00:18:23.874 --rc genhtml_function_coverage=1 00:18:23.874 --rc genhtml_legend=1 00:18:23.874 --rc geninfo_all_blocks=1 00:18:23.874 --rc geninfo_unexecuted_blocks=1 00:18:23.874 00:18:23.874 ' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:23.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.874 --rc genhtml_branch_coverage=1 00:18:23.874 --rc genhtml_function_coverage=1 00:18:23.874 --rc genhtml_legend=1 00:18:23.874 --rc geninfo_all_blocks=1 00:18:23.874 --rc geninfo_unexecuted_blocks=1 00:18:23.874 00:18:23.874 ' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:23.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.874 --rc genhtml_branch_coverage=1 00:18:23.874 --rc genhtml_function_coverage=1 00:18:23.874 --rc genhtml_legend=1 00:18:23.874 --rc geninfo_all_blocks=1 00:18:23.874 --rc geninfo_unexecuted_blocks=1 00:18:23.874 00:18:23.874 ' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.874 15:55:09 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:23.874 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:23.874 15:55:09 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:18:23.874 15:55:09 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:30.434 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:30.434 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:30.435 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:30.435 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
unknown ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:30.435 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:30.435 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # rdma_device_init 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:30.435 15:55:15 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:30.435 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:30.436 15:55:15 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:30.436 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:30.436 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:30.436 altname enp217s0f0np0 00:18:30.436 altname ens818f0np0 00:18:30.436 inet 192.168.100.8/24 scope global mlx_0_0 00:18:30.436 valid_lft forever preferred_lft forever 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:30.436 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:30.436 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:30.436 altname enp217s0f1np1 00:18:30.436 altname ens818f1np1 00:18:30.436 inet 192.168.100.9/24 scope global mlx_0_1 00:18:30.436 valid_lft forever preferred_lft forever 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:30.436 15:55:15 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:30.436 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:30.694 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:30.694 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:30.694 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.694 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:30.694 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:30.694 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:18:30.694 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:30.694 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.694 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:30.695 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:30.695 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:30.695 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:30.695 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:18:30.695 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:30.695 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:30.695 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:30.695 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:30.695 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:30.695 15:55:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:30.695 192.168.100.9' 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # head -n 1 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 
-- # echo '192.168.100.8 00:18:30.695 192.168.100.9' 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:30.695 192.168.100.9' 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # tail -n +2 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # head -n 1 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3310703 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3310703 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3310703 ']' 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.695 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:30.695 [2024-11-17 15:55:16.141943] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:30.695 [2024-11-17 15:55:16.142038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.953 [2024-11-17 15:55:16.273990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:30.953 [2024-11-17 15:55:16.374860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.953 [2024-11-17 15:55:16.374907] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.954 [2024-11-17 15:55:16.374919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.954 [2024-11-17 15:55:16.374932] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.954 [2024-11-17 15:55:16.374942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.954 [2024-11-17 15:55:16.377469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.954 [2024-11-17 15:55:16.377544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:30.954 [2024-11-17 15:55:16.377620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.954 [2024-11-17 15:55:16.377623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:31.520 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.520 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:18:31.520 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.520 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.521 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:31.521 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.521 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:31.521 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:31.521 15:55:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:18:31.778 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:18:31.778 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:18:31.778 "nvmf_tgt_1" 00:18:31.778 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:18:31.778 "nvmf_tgt_2" 00:18:31.778 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:31.778 
15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:18:32.036 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:18:32.036 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:18:32.036 true 00:18:32.036 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:18:32.294 true 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:32.294 rmmod nvme_rdma 00:18:32.294 rmmod nvme_fabrics 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3310703 ']' 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3310703 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3310703 ']' 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3310703 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.294 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3310703 00:18:32.552 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.552 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.552 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3310703' 00:18:32.552 killing process with pid 3310703 00:18:32.552 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3310703 00:18:32.552 15:55:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3310703 00:18:33.488 15:55:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:33.488 15:55:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:18:33.488 00:18:33.488 real 0m9.802s 00:18:33.488 user 0m12.517s 00:18:33.488 sys 0m5.706s 00:18:33.488 15:55:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.488 15:55:18 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:18:33.488 ************************************ 00:18:33.488 END TEST nvmf_multitarget 00:18:33.488 ************************************ 00:18:33.488 15:55:18 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:18:33.488 15:55:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:33.488 15:55:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:33.488 15:55:18 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:33.488 ************************************ 00:18:33.488 START TEST nvmf_rpc 00:18:33.488 ************************************ 00:18:33.488 15:55:18 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:18:33.747 * Looking for test storage... 
00:18:33.747 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:18:33.747 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:33.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.748 --rc genhtml_branch_coverage=1 00:18:33.748 --rc genhtml_function_coverage=1 00:18:33.748 --rc genhtml_legend=1 00:18:33.748 --rc geninfo_all_blocks=1 00:18:33.748 --rc geninfo_unexecuted_blocks=1 00:18:33.748 00:18:33.748 ' 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:33.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.748 --rc genhtml_branch_coverage=1 00:18:33.748 --rc genhtml_function_coverage=1 00:18:33.748 --rc genhtml_legend=1 00:18:33.748 --rc geninfo_all_blocks=1 00:18:33.748 --rc geninfo_unexecuted_blocks=1 00:18:33.748 00:18:33.748 ' 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:33.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.748 --rc genhtml_branch_coverage=1 00:18:33.748 --rc genhtml_function_coverage=1 00:18:33.748 --rc genhtml_legend=1 00:18:33.748 --rc geninfo_all_blocks=1 00:18:33.748 --rc geninfo_unexecuted_blocks=1 00:18:33.748 00:18:33.748 ' 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:33.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:33.748 --rc genhtml_branch_coverage=1 00:18:33.748 --rc genhtml_function_coverage=1 00:18:33.748 --rc genhtml_legend=1 00:18:33.748 --rc geninfo_all_blocks=1 00:18:33.748 --rc geninfo_unexecuted_blocks=1 00:18:33.748 00:18:33.748 ' 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:33.748 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:18:33.748 15:55:19 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:18:33.748 15:55:19 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.309 15:55:25 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:40.309 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:40.309 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:40.309 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:40.310 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:40.310 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # rdma_device_init 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:40.310 15:55:25 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@530 -- # allocate_nic_ips 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:40.310 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:40.569 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:40.569 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:40.569 altname enp217s0f0np0 00:18:40.569 altname ens818f0np0 00:18:40.569 inet 192.168.100.8/24 scope global mlx_0_0 00:18:40.569 valid_lft forever preferred_lft forever 00:18:40.569 
15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:40.569 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:40.569 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:40.569 altname enp217s0f1np1 00:18:40.569 altname ens818f1np1 00:18:40.569 inet 192.168.100.9/24 scope global mlx_0_1 00:18:40.569 valid_lft forever preferred_lft forever 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:40.569 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:18:40.570 192.168.100.9' 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:18:40.570 192.168.100.9' 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # head -n 1 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:18:40.570 192.168.100.9' 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # tail -n +2 00:18:40.570 15:55:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # head -n 1 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
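[editor's note] The traced nvmf/common.sh lines above derive the two RDMA target addresses (192.168.100.8 and 192.168.100.9) from the mlx_0_0/mlx_0_1 interfaces. The following is a minimal, hedged sketch of that flow based only on the commands visible in the trace; it is not the actual nvmf/common.sh source, and the hard-coded interface list stands in for get_rdma_if_list.

```bash
#!/usr/bin/env bash
# Sketch of how the harness appears to collect the RDMA target IPs.
# Assumes mlx_0_0/mlx_0_1 exist with IPv4 addresses assigned, as in this run.

get_ip_address() {
    local interface=$1
    # Same pipeline as the traced nvmf/common.sh@117 lines:
    # fourth field of `ip -o -4 addr show`, with the /prefix stripped.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(
    for nic_name in mlx_0_0 mlx_0_1; do   # stand-in for get_rdma_if_list
        get_ip_address "$nic_name"
    done
)

NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

echo "first:  $NVMF_FIRST_TARGET_IP"    # 192.168.100.8 in this run
echo "second: $NVMF_SECOND_TARGET_IP"   # 192.168.100.9 in this run
```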
00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3314567 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3314567 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3314567 ']' 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.570 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:40.828 [2024-11-17 15:55:26.137445] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:40.828 [2024-11-17 15:55:26.137573] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.828 [2024-11-17 15:55:26.273496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:41.087 [2024-11-17 15:55:26.372829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:41.087 [2024-11-17 15:55:26.372876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.087 [2024-11-17 15:55:26.372890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.087 [2024-11-17 15:55:26.372903] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.087 [2024-11-17 15:55:26.372914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
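[editor's note] The nvmfappstart sequence traced above launches nvmf_tgt with the same flags used by the earlier multitarget test, records nvmfpid, and blocks in waitforlisten until the RPC socket answers. Below is a hedged sketch of that startup/wait pattern; the polling loop and relative paths are illustrative assumptions, not the exact SPDK helpers.

```bash
#!/usr/bin/env bash
# Illustrative nvmfappstart/waitforlisten flow, assuming it runs from an SPDK checkout.

NVMF_APP=./build/bin/nvmf_tgt
rpc_sock=/var/tmp/spdk.sock

"$NVMF_APP" -i 0 -e 0xFFFF -m 0xF &     # same flags as nvmf/common.sh@508
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
for ((i = 0; i < 100; i++)); do         # max_retries=100, as in the trace
    # Ready once the RPC socket accepts a simple request.
    if ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; then
        break
    fi
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done
```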
00:18:41.087 [2024-11-17 15:55:26.375529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.087 [2024-11-17 15:55:26.375609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.087 [2024-11-17 15:55:26.375643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.087 [2024-11-17 15:55:26.375651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:41.653 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.653 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:41.653 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.653 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.653 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.653 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.653 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:18:41.653 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.653 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.653 15:55:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.653 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:18:41.653 "tick_rate": 2500000000, 00:18:41.653 "poll_groups": [ 00:18:41.653 { 00:18:41.653 "name": "nvmf_tgt_poll_group_000", 00:18:41.653 "admin_qpairs": 0, 00:18:41.653 "io_qpairs": 0, 00:18:41.653 "current_admin_qpairs": 0, 00:18:41.653 "current_io_qpairs": 0, 00:18:41.653 "pending_bdev_io": 0, 00:18:41.653 "completed_nvme_io": 0, 00:18:41.653 "transports": [] 00:18:41.653 }, 00:18:41.653 { 00:18:41.653 "name": "nvmf_tgt_poll_group_001", 00:18:41.653 "admin_qpairs": 0, 00:18:41.653 "io_qpairs": 0, 00:18:41.653 "current_admin_qpairs": 0, 00:18:41.653 "current_io_qpairs": 0, 00:18:41.653 "pending_bdev_io": 0, 00:18:41.653 "completed_nvme_io": 0, 00:18:41.653 "transports": [] 00:18:41.653 }, 00:18:41.653 { 00:18:41.653 "name": "nvmf_tgt_poll_group_002", 00:18:41.653 "admin_qpairs": 0, 00:18:41.653 "io_qpairs": 0, 00:18:41.653 "current_admin_qpairs": 0, 00:18:41.653 "current_io_qpairs": 0, 00:18:41.653 "pending_bdev_io": 0, 00:18:41.653 "completed_nvme_io": 0, 00:18:41.653 "transports": [] 00:18:41.653 }, 00:18:41.653 { 00:18:41.653 "name": "nvmf_tgt_poll_group_003", 00:18:41.653 "admin_qpairs": 0, 00:18:41.653 "io_qpairs": 0, 00:18:41.653 "current_admin_qpairs": 0, 00:18:41.653 "current_io_qpairs": 0, 00:18:41.653 "pending_bdev_io": 0, 00:18:41.653 "completed_nvme_io": 0, 00:18:41.653 "transports": [] 00:18:41.653 } 00:18:41.653 ] 00:18:41.653 }' 00:18:41.653 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:18:41.653 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:18:41.653 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:18:41.653 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:41.653 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:18:41.653 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:18:41.653 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:18:41.653 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:41.653 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.653 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:41.653 [2024-11-17 15:55:27.122018] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7fc62c9a6940) succeed. 00:18:41.653 [2024-11-17 15:55:27.131493] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7fc62c962940) succeed. 00:18:41.911 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.911 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:18:41.911 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.911 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.170 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.170 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:18:42.170 "tick_rate": 2500000000, 00:18:42.170 "poll_groups": [ 00:18:42.170 { 00:18:42.170 "name": "nvmf_tgt_poll_group_000", 00:18:42.170 "admin_qpairs": 0, 00:18:42.170 "io_qpairs": 0, 00:18:42.170 "current_admin_qpairs": 0, 00:18:42.170 "current_io_qpairs": 0, 00:18:42.170 "pending_bdev_io": 0, 00:18:42.170 "completed_nvme_io": 0, 00:18:42.170 "transports": [ 00:18:42.170 { 00:18:42.170 "trtype": "RDMA", 00:18:42.170 "pending_data_buffer": 0, 00:18:42.170 "devices": [ 00:18:42.170 { 00:18:42.170 "name": "mlx5_0", 00:18:42.170 "polls": 31207, 00:18:42.170 "idle_polls": 31207, 00:18:42.170 "completions": 0, 00:18:42.170 "requests": 0, 00:18:42.170 "request_latency": 0, 00:18:42.170 "pending_free_request": 0, 00:18:42.170 "pending_rdma_read": 0, 00:18:42.170 "pending_rdma_write": 0, 00:18:42.170 "pending_rdma_send": 0, 00:18:42.170 "total_send_wrs": 0, 00:18:42.170 "send_doorbell_updates": 0, 00:18:42.170 "total_recv_wrs": 4096, 00:18:42.170 "recv_doorbell_updates": 1 00:18:42.170 }, 00:18:42.170 { 00:18:42.170 "name": "mlx5_1", 00:18:42.170 "polls": 31207, 00:18:42.170 "idle_polls": 31207, 00:18:42.170 "completions": 0, 00:18:42.170 "requests": 0, 00:18:42.170 "request_latency": 0, 00:18:42.170 "pending_free_request": 0, 00:18:42.170 "pending_rdma_read": 0, 00:18:42.170 "pending_rdma_write": 0, 00:18:42.170 "pending_rdma_send": 0, 00:18:42.170 "total_send_wrs": 0, 00:18:42.170 "send_doorbell_updates": 0, 00:18:42.170 "total_recv_wrs": 4096, 00:18:42.170 "recv_doorbell_updates": 1 00:18:42.170 } 00:18:42.170 ] 00:18:42.170 } 00:18:42.170 ] 00:18:42.170 }, 00:18:42.170 { 00:18:42.170 "name": "nvmf_tgt_poll_group_001", 00:18:42.170 "admin_qpairs": 0, 00:18:42.170 "io_qpairs": 0, 00:18:42.170 "current_admin_qpairs": 0, 00:18:42.170 "current_io_qpairs": 0, 00:18:42.170 "pending_bdev_io": 0, 00:18:42.170 "completed_nvme_io": 0, 00:18:42.170 "transports": [ 00:18:42.170 { 00:18:42.170 "trtype": "RDMA", 00:18:42.170 "pending_data_buffer": 0, 00:18:42.170 "devices": [ 00:18:42.170 { 
00:18:42.170 "name": "mlx5_0", 00:18:42.170 "polls": 19952, 00:18:42.170 "idle_polls": 19952, 00:18:42.170 "completions": 0, 00:18:42.170 "requests": 0, 00:18:42.170 "request_latency": 0, 00:18:42.170 "pending_free_request": 0, 00:18:42.170 "pending_rdma_read": 0, 00:18:42.170 "pending_rdma_write": 0, 00:18:42.170 "pending_rdma_send": 0, 00:18:42.170 "total_send_wrs": 0, 00:18:42.170 "send_doorbell_updates": 0, 00:18:42.170 "total_recv_wrs": 4096, 00:18:42.170 "recv_doorbell_updates": 1 00:18:42.170 }, 00:18:42.170 { 00:18:42.170 "name": "mlx5_1", 00:18:42.170 "polls": 19952, 00:18:42.170 "idle_polls": 19952, 00:18:42.170 "completions": 0, 00:18:42.170 "requests": 0, 00:18:42.170 "request_latency": 0, 00:18:42.170 "pending_free_request": 0, 00:18:42.170 "pending_rdma_read": 0, 00:18:42.170 "pending_rdma_write": 0, 00:18:42.170 "pending_rdma_send": 0, 00:18:42.170 "total_send_wrs": 0, 00:18:42.170 "send_doorbell_updates": 0, 00:18:42.170 "total_recv_wrs": 4096, 00:18:42.170 "recv_doorbell_updates": 1 00:18:42.170 } 00:18:42.170 ] 00:18:42.170 } 00:18:42.170 ] 00:18:42.170 }, 00:18:42.170 { 00:18:42.170 "name": "nvmf_tgt_poll_group_002", 00:18:42.170 "admin_qpairs": 0, 00:18:42.170 "io_qpairs": 0, 00:18:42.170 "current_admin_qpairs": 0, 00:18:42.170 "current_io_qpairs": 0, 00:18:42.170 "pending_bdev_io": 0, 00:18:42.170 "completed_nvme_io": 0, 00:18:42.170 "transports": [ 00:18:42.170 { 00:18:42.170 "trtype": "RDMA", 00:18:42.170 "pending_data_buffer": 0, 00:18:42.170 "devices": [ 00:18:42.170 { 00:18:42.170 "name": "mlx5_0", 00:18:42.171 "polls": 9998, 00:18:42.171 "idle_polls": 9998, 00:18:42.171 "completions": 0, 00:18:42.171 "requests": 0, 00:18:42.171 "request_latency": 0, 00:18:42.171 "pending_free_request": 0, 00:18:42.171 "pending_rdma_read": 0, 00:18:42.171 "pending_rdma_write": 0, 00:18:42.171 "pending_rdma_send": 0, 00:18:42.171 "total_send_wrs": 0, 00:18:42.171 "send_doorbell_updates": 0, 00:18:42.171 "total_recv_wrs": 4096, 00:18:42.171 "recv_doorbell_updates": 1 00:18:42.171 }, 00:18:42.171 { 00:18:42.171 "name": "mlx5_1", 00:18:42.171 "polls": 9998, 00:18:42.171 "idle_polls": 9998, 00:18:42.171 "completions": 0, 00:18:42.171 "requests": 0, 00:18:42.171 "request_latency": 0, 00:18:42.171 "pending_free_request": 0, 00:18:42.171 "pending_rdma_read": 0, 00:18:42.171 "pending_rdma_write": 0, 00:18:42.171 "pending_rdma_send": 0, 00:18:42.171 "total_send_wrs": 0, 00:18:42.171 "send_doorbell_updates": 0, 00:18:42.171 "total_recv_wrs": 4096, 00:18:42.171 "recv_doorbell_updates": 1 00:18:42.171 } 00:18:42.171 ] 00:18:42.171 } 00:18:42.171 ] 00:18:42.171 }, 00:18:42.171 { 00:18:42.171 "name": "nvmf_tgt_poll_group_003", 00:18:42.171 "admin_qpairs": 0, 00:18:42.171 "io_qpairs": 0, 00:18:42.171 "current_admin_qpairs": 0, 00:18:42.171 "current_io_qpairs": 0, 00:18:42.171 "pending_bdev_io": 0, 00:18:42.171 "completed_nvme_io": 0, 00:18:42.171 "transports": [ 00:18:42.171 { 00:18:42.171 "trtype": "RDMA", 00:18:42.171 "pending_data_buffer": 0, 00:18:42.171 "devices": [ 00:18:42.171 { 00:18:42.171 "name": "mlx5_0", 00:18:42.171 "polls": 806, 00:18:42.171 "idle_polls": 806, 00:18:42.171 "completions": 0, 00:18:42.171 "requests": 0, 00:18:42.171 "request_latency": 0, 00:18:42.171 "pending_free_request": 0, 00:18:42.171 "pending_rdma_read": 0, 00:18:42.171 "pending_rdma_write": 0, 00:18:42.171 "pending_rdma_send": 0, 00:18:42.171 "total_send_wrs": 0, 00:18:42.171 "send_doorbell_updates": 0, 00:18:42.171 "total_recv_wrs": 4096, 00:18:42.171 "recv_doorbell_updates": 1 00:18:42.171 }, 00:18:42.171 { 
00:18:42.171 "name": "mlx5_1", 00:18:42.171 "polls": 806, 00:18:42.171 "idle_polls": 806, 00:18:42.171 "completions": 0, 00:18:42.171 "requests": 0, 00:18:42.171 "request_latency": 0, 00:18:42.171 "pending_free_request": 0, 00:18:42.171 "pending_rdma_read": 0, 00:18:42.171 "pending_rdma_write": 0, 00:18:42.171 "pending_rdma_send": 0, 00:18:42.171 "total_send_wrs": 0, 00:18:42.171 "send_doorbell_updates": 0, 00:18:42.171 "total_recv_wrs": 4096, 00:18:42.171 "recv_doorbell_updates": 1 00:18:42.171 } 00:18:42.171 ] 00:18:42.171 } 00:18:42.171 ] 00:18:42.171 } 00:18:42.171 ] 00:18:42.171 }' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # 
MALLOC_BLOCK_SIZE=512 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.171 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.429 Malloc1 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.429 [2024-11-17 15:55:27.794162] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:18:42.429 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # 
local arg=nvme 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:18:42.430 [2024-11-17 15:55:27.840570] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:18:42.430 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:42.430 could not add new controller: failed to write to nvme-fabrics device 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.430 15:55:27 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:43.362 15:55:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:18:43.362 15:55:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:43.362 15:55:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:43.362 15:55:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:43.362 15:55:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:45.890 15:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 
)) 00:18:45.890 15:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:45.890 15:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:45.890 15:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:45.890 15:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.890 15:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:45.890 15:55:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:46.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:18:46.458 15:55:31 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:46.458 [2024-11-17 15:55:31.942719] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:18:46.458 Failed to write to /dev/nvme-fabrics: Input/output error 00:18:46.458 could not add new controller: failed to write to nvme-fabrics device 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.458 15:55:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:47.833 15:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:18:47.833 15:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:47.833 15:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.833 15:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:47.833 15:55:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:49.734 15:55:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:49.734 15:55:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:49.734 15:55:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:49.734 15:55:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:49.734 15:55:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.734 15:55:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:49.734 15:55:34 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.785 15:55:35 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.785 [2024-11-17 15:55:36.009752] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.785 15:55:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:51.720 15:55:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:51.720 15:55:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:51.720 15:55:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.720 15:55:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:51.720 15:55:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:53.623 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:53.623 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:53.623 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:53.623 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:53.623 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.623 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:53.623 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:54.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:54.557 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:54.557 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:54.557 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:54.557 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.557 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:54.557 15:55:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.557 [2024-11-17 15:55:40.048498] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.557 15:55:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:55.932 15:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:55.932 15:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:55.932 15:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.932 15:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:55.932 15:55:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:57.830 15:55:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:57.830 15:55:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # 
lsblk -l -o NAME,SERIAL 00:18:57.830 15:55:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:57.830 15:55:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:57.830 15:55:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.830 15:55:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:57.830 15:55:43 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:58.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.763 [2024-11-17 15:55:44.097662] 
rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.763 15:55:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:59.696 15:55:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:59.696 15:55:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:59.696 15:55:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:59.696 15:55:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:59.696 15:55:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:01.596 15:55:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:01.596 15:55:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:01.596 15:55:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:01.596 15:55:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:01.596 15:55:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:01.596 15:55:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:01.596 15:55:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:02.967 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:02.967 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:02.967 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:02.967 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:02.967 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.968 [2024-11-17 15:55:48.171507] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.968 15:55:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:03.897 15:55:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:03.897 15:55:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:03.897 15:55:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.897 15:55:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:03.897 15:55:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:05.793 15:55:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:05.793 15:55:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:05.793 15:55:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:05.793 15:55:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:05.793 15:55:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.793 15:55:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:05.793 15:55:51 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:06.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.725 [2024-11-17 15:55:52.228589] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.725 15:55:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:08.096 15:55:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:19:08.096 15:55:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:19:08.096 15:55:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:08.096 15:55:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:08.096 15:55:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:19:09.991 15:55:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:09.991 15:55:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:09.991 15:55:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:09.991 15:55:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:09.991 15:55:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.991 15:55:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:19:09.991 15:55:55 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:10.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.922 [2024-11-17 15:55:56.297531] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.922 15:55:56 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.922 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 [2024-11-17 15:55:56.349669] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 [2024-11-17 15:55:56.401921] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
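The trace above runs two loops from target/rpc.sh against nqn.2016-06.io.spdk:cnode1: five passes of a connect cycle (rpc.sh@81-94) that create the subsystem with serial SPDKISFASTANDAWESOME, add the RDMA listener on 192.168.100.8:4420, attach Malloc1 as namespace 5, allow any host, connect with nvme-cli, wait for the serial to appear in lsblk, then disconnect and tear down; and five passes of a target-only cycle (rpc.sh@99-107) that repeat the create/add/remove/delete steps without a host connection. A condensed sketch of one connect-cycle pass follows, assuming the harness's rpc_cmd wrapper is sourced and reusing the host NQN/ID from this run; the until loop is a simplified stand-in for the waitforserial and waitforserial_disconnect helpers, which poll lsblk the same way.

# One pass of the connect cycle exercised by target/rpc.sh@81-94 (sketch, not the script itself).
NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

rpc_cmd nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME             # subsystem with a fixed serial
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420
rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5                        # expose the malloc bdev as nsid 5
rpc_cmd nvmf_subsystem_allow_any_host "$NQN"

nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid=8013ee90-59d8-e711-906e-00163566263e \
  -t rdma -n "$NQN" -a 192.168.100.8 -s 4420
# simplified stand-in for waitforserial: poll lsblk until the serial shows up
until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
nvme disconnect -n "$NQN"

rpc_cmd nvmf_subsystem_remove_ns "$NQN" 5
rpc_cmd nvmf_delete_subsystem "$NQN"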
00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:10.923 [2024-11-17 15:55:56.454105] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.923 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.181 15:55:56 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.181 [2024-11-17 15:55:56.506313] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.181 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:19:11.182 "tick_rate": 2500000000, 00:19:11.182 "poll_groups": [ 00:19:11.182 { 00:19:11.182 "name": "nvmf_tgt_poll_group_000", 00:19:11.182 "admin_qpairs": 2, 00:19:11.182 "io_qpairs": 27, 00:19:11.182 "current_admin_qpairs": 0, 00:19:11.182 "current_io_qpairs": 0, 00:19:11.182 "pending_bdev_io": 0, 00:19:11.182 "completed_nvme_io": 127, 00:19:11.182 "transports": [ 00:19:11.182 { 00:19:11.182 "trtype": "RDMA", 00:19:11.182 "pending_data_buffer": 0, 00:19:11.182 "devices": [ 00:19:11.182 { 00:19:11.182 "name": "mlx5_0", 00:19:11.182 "polls": 3387398, 00:19:11.182 "idle_polls": 3387074, 00:19:11.182 "completions": 365, 00:19:11.182 "requests": 182, 00:19:11.182 "request_latency": 46808572, 00:19:11.182 "pending_free_request": 0, 00:19:11.182 "pending_rdma_read": 0, 00:19:11.182 "pending_rdma_write": 0, 00:19:11.182 "pending_rdma_send": 0, 00:19:11.182 "total_send_wrs": 308, 00:19:11.182 "send_doorbell_updates": 159, 00:19:11.182 "total_recv_wrs": 4278, 00:19:11.182 "recv_doorbell_updates": 159 00:19:11.182 }, 00:19:11.182 { 00:19:11.182 "name": "mlx5_1", 00:19:11.182 "polls": 3387398, 00:19:11.182 "idle_polls": 3387398, 00:19:11.182 "completions": 0, 00:19:11.182 "requests": 0, 00:19:11.182 "request_latency": 0, 00:19:11.182 "pending_free_request": 0, 00:19:11.182 "pending_rdma_read": 0, 00:19:11.182 "pending_rdma_write": 0, 00:19:11.182 "pending_rdma_send": 0, 00:19:11.182 "total_send_wrs": 0, 00:19:11.182 "send_doorbell_updates": 0, 00:19:11.182 "total_recv_wrs": 4096, 00:19:11.182 "recv_doorbell_updates": 1 00:19:11.182 } 00:19:11.182 ] 00:19:11.182 } 00:19:11.182 ] 00:19:11.182 }, 00:19:11.182 { 00:19:11.182 "name": "nvmf_tgt_poll_group_001", 00:19:11.182 "admin_qpairs": 2, 00:19:11.182 "io_qpairs": 26, 00:19:11.182 "current_admin_qpairs": 0, 00:19:11.182 "current_io_qpairs": 0, 00:19:11.182 "pending_bdev_io": 0, 00:19:11.182 "completed_nvme_io": 119, 00:19:11.182 "transports": [ 00:19:11.182 { 00:19:11.182 "trtype": "RDMA", 00:19:11.182 "pending_data_buffer": 0, 00:19:11.182 "devices": [ 00:19:11.182 { 00:19:11.182 "name": "mlx5_0", 00:19:11.182 "polls": 3322399, 00:19:11.182 "idle_polls": 3322090, 00:19:11.182 "completions": 344, 00:19:11.182 "requests": 172, 00:19:11.182 "request_latency": 42991960, 00:19:11.182 "pending_free_request": 0, 00:19:11.182 "pending_rdma_read": 0, 00:19:11.182 "pending_rdma_write": 0, 00:19:11.182 "pending_rdma_send": 0, 00:19:11.182 "total_send_wrs": 289, 00:19:11.182 "send_doorbell_updates": 151, 00:19:11.182 "total_recv_wrs": 4268, 00:19:11.182 "recv_doorbell_updates": 152 00:19:11.182 }, 00:19:11.182 { 00:19:11.182 "name": "mlx5_1", 00:19:11.182 "polls": 3322399, 00:19:11.182 "idle_polls": 3322399, 00:19:11.182 "completions": 0, 00:19:11.182 "requests": 0, 00:19:11.182 
"request_latency": 0, 00:19:11.182 "pending_free_request": 0, 00:19:11.182 "pending_rdma_read": 0, 00:19:11.182 "pending_rdma_write": 0, 00:19:11.182 "pending_rdma_send": 0, 00:19:11.182 "total_send_wrs": 0, 00:19:11.182 "send_doorbell_updates": 0, 00:19:11.182 "total_recv_wrs": 4096, 00:19:11.182 "recv_doorbell_updates": 1 00:19:11.182 } 00:19:11.182 ] 00:19:11.182 } 00:19:11.182 ] 00:19:11.182 }, 00:19:11.182 { 00:19:11.182 "name": "nvmf_tgt_poll_group_002", 00:19:11.182 "admin_qpairs": 1, 00:19:11.182 "io_qpairs": 26, 00:19:11.182 "current_admin_qpairs": 0, 00:19:11.182 "current_io_qpairs": 0, 00:19:11.182 "pending_bdev_io": 0, 00:19:11.182 "completed_nvme_io": 34, 00:19:11.182 "transports": [ 00:19:11.182 { 00:19:11.182 "trtype": "RDMA", 00:19:11.182 "pending_data_buffer": 0, 00:19:11.182 "devices": [ 00:19:11.182 { 00:19:11.182 "name": "mlx5_0", 00:19:11.182 "polls": 3309914, 00:19:11.182 "idle_polls": 3309794, 00:19:11.182 "completions": 125, 00:19:11.182 "requests": 62, 00:19:11.182 "request_latency": 12141610, 00:19:11.182 "pending_free_request": 0, 00:19:11.182 "pending_rdma_read": 0, 00:19:11.182 "pending_rdma_write": 0, 00:19:11.182 "pending_rdma_send": 0, 00:19:11.182 "total_send_wrs": 84, 00:19:11.182 "send_doorbell_updates": 61, 00:19:11.182 "total_recv_wrs": 4158, 00:19:11.182 "recv_doorbell_updates": 61 00:19:11.182 }, 00:19:11.182 { 00:19:11.182 "name": "mlx5_1", 00:19:11.182 "polls": 3309914, 00:19:11.182 "idle_polls": 3309914, 00:19:11.182 "completions": 0, 00:19:11.182 "requests": 0, 00:19:11.182 "request_latency": 0, 00:19:11.182 "pending_free_request": 0, 00:19:11.182 "pending_rdma_read": 0, 00:19:11.182 "pending_rdma_write": 0, 00:19:11.182 "pending_rdma_send": 0, 00:19:11.182 "total_send_wrs": 0, 00:19:11.182 "send_doorbell_updates": 0, 00:19:11.182 "total_recv_wrs": 4096, 00:19:11.182 "recv_doorbell_updates": 1 00:19:11.182 } 00:19:11.182 ] 00:19:11.182 } 00:19:11.182 ] 00:19:11.182 }, 00:19:11.182 { 00:19:11.182 "name": "nvmf_tgt_poll_group_003", 00:19:11.182 "admin_qpairs": 2, 00:19:11.182 "io_qpairs": 26, 00:19:11.182 "current_admin_qpairs": 0, 00:19:11.182 "current_io_qpairs": 0, 00:19:11.182 "pending_bdev_io": 0, 00:19:11.182 "completed_nvme_io": 175, 00:19:11.182 "transports": [ 00:19:11.182 { 00:19:11.182 "trtype": "RDMA", 00:19:11.182 "pending_data_buffer": 0, 00:19:11.182 "devices": [ 00:19:11.182 { 00:19:11.182 "name": "mlx5_0", 00:19:11.182 "polls": 2581095, 00:19:11.182 "idle_polls": 2580702, 00:19:11.182 "completions": 458, 00:19:11.182 "requests": 229, 00:19:11.182 "request_latency": 64000666, 00:19:11.182 "pending_free_request": 0, 00:19:11.182 "pending_rdma_read": 0, 00:19:11.182 "pending_rdma_write": 0, 00:19:11.182 "pending_rdma_send": 0, 00:19:11.182 "total_send_wrs": 403, 00:19:11.182 "send_doorbell_updates": 190, 00:19:11.182 "total_recv_wrs": 4325, 00:19:11.182 "recv_doorbell_updates": 191 00:19:11.182 }, 00:19:11.182 { 00:19:11.182 "name": "mlx5_1", 00:19:11.182 "polls": 2581095, 00:19:11.182 "idle_polls": 2581095, 00:19:11.182 "completions": 0, 00:19:11.182 "requests": 0, 00:19:11.182 "request_latency": 0, 00:19:11.182 "pending_free_request": 0, 00:19:11.182 "pending_rdma_read": 0, 00:19:11.182 "pending_rdma_write": 0, 00:19:11.182 "pending_rdma_send": 0, 00:19:11.182 "total_send_wrs": 0, 00:19:11.182 "send_doorbell_updates": 0, 00:19:11.182 "total_recv_wrs": 4096, 00:19:11.182 "recv_doorbell_updates": 1 00:19:11.182 } 00:19:11.182 ] 00:19:11.182 } 00:19:11.182 ] 00:19:11.182 } 00:19:11.182 ] 00:19:11.182 }' 00:19:11.182 15:55:56 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:19:11.182 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1292 > 0 )) 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 165942808 > 0 )) 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:11.441 rmmod nvme_rdma 00:19:11.441 rmmod nvme_fabrics 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3314567 ']' 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3314567 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3314567 ']' 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3314567 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3314567 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3314567' 00:19:11.441 killing process with pid 3314567 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3314567 00:19:11.441 15:55:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3314567 00:19:13.355 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:13.355 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:13.355 00:19:13.355 real 0m39.720s 00:19:13.355 user 2m9.277s 00:19:13.355 sys 0m7.286s 00:19:13.355 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.355 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:13.355 ************************************ 00:19:13.355 END TEST nvmf_rpc 00:19:13.355 ************************************ 00:19:13.355 15:55:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:19:13.355 15:55:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.355 15:55:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.355 15:55:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.355 ************************************ 00:19:13.355 START TEST nvmf_invalid 00:19:13.355 ************************************ 00:19:13.355 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:19:13.355 * Looking for test storage... 
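Before the nvmf_invalid run gets going, a note on how the nvmf_rpc assertions just above were computed: rpc.sh aggregates the nvmf_get_stats JSON with a small jq + awk helper (the rpc.sh@19/@20 trace lines), where jq pulls one numeric field out of every poll group or RDMA device, awk sums the values, and the test only requires the sum to be positive. A minimal sketch reconstructed from those trace lines, not copied from the source (feeding $stats through a here-string is an assumption; rpc_cmd is the harness wrapper around scripts/rpc.py):

  # jsum: sum one numeric field across the nvmf_get_stats output (sketch)
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }

  stats=$(rpc_cmd nvmf_get_stats)                                   # JSON as captured above
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))                   # 2+2+1+2 = 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))                      # 27+26+26+26 = 105
  (( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))   # 165942808 here

The checks deliberately assert only "> 0" rather than exact values, since qpair counts and latency totals vary from run to run.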
00:19:13.614 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:13.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.614 --rc genhtml_branch_coverage=1 00:19:13.614 --rc genhtml_function_coverage=1 00:19:13.614 --rc genhtml_legend=1 00:19:13.614 --rc geninfo_all_blocks=1 00:19:13.614 --rc geninfo_unexecuted_blocks=1 00:19:13.614 00:19:13.614 ' 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:13.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.614 --rc genhtml_branch_coverage=1 00:19:13.614 --rc genhtml_function_coverage=1 00:19:13.614 --rc genhtml_legend=1 00:19:13.614 --rc geninfo_all_blocks=1 00:19:13.614 --rc geninfo_unexecuted_blocks=1 00:19:13.614 00:19:13.614 ' 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:13.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.614 --rc genhtml_branch_coverage=1 00:19:13.614 --rc genhtml_function_coverage=1 00:19:13.614 --rc genhtml_legend=1 00:19:13.614 --rc geninfo_all_blocks=1 00:19:13.614 --rc geninfo_unexecuted_blocks=1 00:19:13.614 00:19:13.614 ' 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:13.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.614 --rc genhtml_branch_coverage=1 00:19:13.614 --rc genhtml_function_coverage=1 00:19:13.614 --rc genhtml_legend=1 00:19:13.614 --rc geninfo_all_blocks=1 00:19:13.614 --rc geninfo_unexecuted_blocks=1 00:19:13.614 00:19:13.614 ' 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:19:13.614 
15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.614 15:55:58 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.614 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:13.614 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:13.614 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.614 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.614 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.614 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.615 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:19:13.615 15:55:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:19:20.174 15:56:05 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:20.174 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:20.174 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:20.174 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:20.174 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:20.174 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # rdma_device_init 00:19:20.175 15:56:05 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:20.175 15:56:05 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:20.175 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:20.175 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:20.175 altname enp217s0f0np0 00:19:20.175 altname ens818f0np0 00:19:20.175 inet 192.168.100.8/24 scope global mlx_0_0 00:19:20.175 valid_lft forever preferred_lft forever 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:20.175 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:20.175 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:20.175 altname enp217s0f1np1 00:19:20.175 altname ens818f1np1 00:19:20.175 inet 192.168.100.9/24 scope global mlx_0_1 00:19:20.175 valid_lft forever preferred_lft forever 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:20.175 15:56:05 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:20.175 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:20.176 192.168.100.9' 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:20.176 192.168.100.9' 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # head -n 1 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:20.176 15:56:05 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:20.176 192.168.100.9' 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # tail -n +2 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # head -n 1 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3323540 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3323540 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3323540 ']' 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.176 15:56:05 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:20.434 [2024-11-17 15:56:05.788474] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:20.434 [2024-11-17 15:56:05.788583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:20.434 [2024-11-17 15:56:05.923312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:20.691 [2024-11-17 15:56:06.022940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:20.691 [2024-11-17 15:56:06.022984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
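The notices above come from the nvmf_tgt instance that invalid.sh starts for itself: nvmfappstart launches build/bin/nvmf_tgt with the -i 0 -e 0xFFFF -m 0xF arguments recorded in the trace, stores the PID in nvmfpid, and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers, after which the malformed nvmf_create_subsystem calls below are issued against it. A rough sketch of that start-up pattern (the polling loop is an assumption; the real waitforlisten helper lives in autotest_common.sh and is not shown in this trace):

  # Launch the target exactly as the trace shows, then wait for its RPC socket (sketch)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # assumed polling loop standing in for waitforlisten
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &> /dev/null; do
      sleep 0.5
  done

The 0xF core mask is also why four reactors come up on cores 0-3 in the notices that follow.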
00:19:20.691 [2024-11-17 15:56:06.022995] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:20.691 [2024-11-17 15:56:06.023008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:20.691 [2024-11-17 15:56:06.023017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:20.691 [2024-11-17 15:56:06.025635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.691 [2024-11-17 15:56:06.025706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.691 [2024-11-17 15:56:06.025802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.691 [2024-11-17 15:56:06.025810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:21.254 15:56:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:21.254 15:56:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:19:21.254 15:56:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:21.254 15:56:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:21.254 15:56:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:21.254 15:56:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:21.254 15:56:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:21.254 15:56:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11665 00:19:21.511 [2024-11-17 15:56:06.803836] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:19:21.511 15:56:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:19:21.511 { 00:19:21.511 "nqn": "nqn.2016-06.io.spdk:cnode11665", 00:19:21.511 "tgt_name": "foobar", 00:19:21.511 "method": "nvmf_create_subsystem", 00:19:21.511 "req_id": 1 00:19:21.511 } 00:19:21.511 Got JSON-RPC error response 00:19:21.511 response: 00:19:21.511 { 00:19:21.511 "code": -32603, 00:19:21.511 "message": "Unable to find target foobar" 00:19:21.511 }' 00:19:21.511 15:56:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:19:21.511 { 00:19:21.511 "nqn": "nqn.2016-06.io.spdk:cnode11665", 00:19:21.511 "tgt_name": "foobar", 00:19:21.511 "method": "nvmf_create_subsystem", 00:19:21.511 "req_id": 1 00:19:21.511 } 00:19:21.511 Got JSON-RPC error response 00:19:21.511 response: 00:19:21.511 { 00:19:21.511 "code": -32603, 00:19:21.511 "message": "Unable to find target foobar" 00:19:21.511 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:19:21.511 15:56:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:19:21.511 15:56:06 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8176 00:19:21.511 [2024-11-17 15:56:07.016612] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode8176: invalid serial number 'SPDKISFASTANDAWESOME' 00:19:21.511 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:19:21.511 { 00:19:21.511 "nqn": "nqn.2016-06.io.spdk:cnode8176", 00:19:21.511 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:21.511 "method": "nvmf_create_subsystem", 00:19:21.511 "req_id": 1 00:19:21.511 } 00:19:21.511 Got JSON-RPC error response 00:19:21.511 response: 00:19:21.511 { 00:19:21.511 "code": -32602, 00:19:21.511 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:21.511 }' 00:19:21.511 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:19:21.511 { 00:19:21.511 "nqn": "nqn.2016-06.io.spdk:cnode8176", 00:19:21.511 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:19:21.511 "method": "nvmf_create_subsystem", 00:19:21.511 "req_id": 1 00:19:21.511 } 00:19:21.511 Got JSON-RPC error response 00:19:21.511 response: 00:19:21.511 { 00:19:21.511 "code": -32602, 00:19:21.511 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:19:21.511 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:19:21.511 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4646 00:19:21.768 [2024-11-17 15:56:07.229312] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4646: invalid model number 'SPDK_Controller' 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:19:21.768 { 00:19:21.768 "nqn": "nqn.2016-06.io.spdk:cnode4646", 00:19:21.768 "model_number": "SPDK_Controller\u001f", 00:19:21.768 "method": "nvmf_create_subsystem", 00:19:21.768 "req_id": 1 00:19:21.768 } 00:19:21.768 Got JSON-RPC error response 00:19:21.768 response: 00:19:21.768 { 00:19:21.768 "code": -32602, 00:19:21.768 "message": "Invalid MN SPDK_Controller\u001f" 00:19:21.768 }' 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:19:21.768 { 00:19:21.768 "nqn": "nqn.2016-06.io.spdk:cnode4646", 00:19:21.768 "model_number": "SPDK_Controller\u001f", 00:19:21.768 "method": "nvmf_create_subsystem", 00:19:21.768 "req_id": 1 00:19:21.768 } 00:19:21.768 Got JSON-RPC error response 00:19:21.768 response: 00:19:21.768 { 00:19:21.768 "code": -32602, 00:19:21.768 "message": "Invalid MN SPDK_Controller\u001f" 00:19:21.768 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@21 -- # local chars 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.768 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:21.769 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:19:22.026 15:56:07 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.026 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.027 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:19:22.027 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:19:22.027 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:19:22.027 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.027 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.027 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ L == \- ]] 00:19:22.027 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'L&u2\%AA.x1k#nWc3`p' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x7e' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.285 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.286 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:19:22.286 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:19:22.286 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:19:22.286 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:19:22.286 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.286 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:19:22.543 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:19:22.544 15:56:07 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'gKok>kcdg~bAIY57/KHL9'\''MJP>!kxX ['\''Zzoevi8' 00:19:22.544 15:56:07 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'gKok>kcdg~bAIY57/KHL9'\''MJP>!kxX ['\''Zzoevi8' nqn.2016-06.io.spdk:cnode3475 00:19:22.801 [2024-11-17 15:56:08.132455] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3475: invalid model number 'gKok>kcdg~bAIY57/KHL9'MJP>!kxX ['Zzoevi8' 00:19:22.801 15:56:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:19:22.801 { 00:19:22.801 "nqn": "nqn.2016-06.io.spdk:cnode3475", 00:19:22.801 "model_number": "gKok>kcdg~bAIY57/KHL9'\''MJP>!k\u007fxX ['\''Zzoevi8", 00:19:22.801 "method": "nvmf_create_subsystem", 00:19:22.801 "req_id": 1 00:19:22.801 } 00:19:22.801 Got JSON-RPC error response 00:19:22.801 response: 00:19:22.801 { 00:19:22.801 "code": -32602, 00:19:22.801 "message": "Invalid MN gKok>kcdg~bAIY57/KHL9'\''MJP>!k\u007fxX ['\''Zzoevi8" 00:19:22.801 }' 00:19:22.801 15:56:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:19:22.801 { 00:19:22.801 "nqn": "nqn.2016-06.io.spdk:cnode3475", 00:19:22.801 "model_number": "gKok>kcdg~bAIY57/KHL9'MJP>!k\u007fxX ['Zzoevi8", 00:19:22.801 "method": "nvmf_create_subsystem", 00:19:22.801 "req_id": 1 00:19:22.801 } 00:19:22.801 Got JSON-RPC error response 00:19:22.801 response: 00:19:22.801 { 00:19:22.801 "code": -32602, 00:19:22.801 "message": "Invalid MN gKok>kcdg~bAIY57/KHL9'MJP>!k\u007fxX ['Zzoevi8" 00:19:22.801 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:19:22.801 15:56:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:19:23.058 [2024-11-17 15:56:08.355230] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f4eabb6a940) succeed. 00:19:23.058 [2024-11-17 15:56:08.364554] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f4eabb26940) succeed. 
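The xtrace block above is target/invalid.sh exercising nvmf_create_subsystem with deliberately bad input: gen_random_s assembles a 41-character model number one byte at a time (printf %x to get the hex code, echo -e '\xNN' to emit the character, string+= to append it), the RPC rejects it with the -32602 "Invalid MN" response, and the script pattern-matches that message. A condensed sketch of the technique follows; the RANDOM-based byte selection, the || true, and the hard-coded length are illustrative assumptions, not the literal helper from the script:

  # Condensed sketch of the pattern driving the xtrace above (assumptions:
  # RANDOM-based byte selection and rpc.py exiting non-zero on error; the real
  # helper lives in target/invalid.sh and is not reproduced verbatim here).
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  gen_random_s() {                        # build an N-char printable-ASCII string
      local length=$1 ll code ch string=
      for (( ll = 0; ll < length; ll++ )); do
          code=$(( RANDOM % 96 + 32 ))    # codes 32..127, as in the chars array above
          printf -v ch "\\x$(printf '%02x' "$code")"   # same printf %x / \xNN trick
          string+=$ch
      done
      printf '%s\n' "$string"
  }

  # Submit the garbage model number and assert the -32602 "Invalid MN" response,
  # mirroring the [[ $out == *"Invalid MN"* ]] checks in the log.
  out=$("$rpc" nvmf_create_subsystem -d "$(gen_random_s 41)" \
        nqn.2016-06.io.spdk:cnode3475 2>&1) || true
  echo "$out" | grep -q 'Invalid MN' && echo 'got the expected JSON-RPC error'

The same capture-and-match pattern produces the "Unable to find target", "Invalid SN", "Invalid cntlid range", and "doesn't exist, cannot delete it" checks earlier and later in this test; with the RDMA transport and mlx5 IB devices created just above, the remaining cases run against the live target.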
00:19:23.315 15:56:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:19:23.572 15:56:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:19:23.572 15:56:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:19:23.572 15:56:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:19:23.572 192.168.100.9' 00:19:23.572 15:56:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:19:23.572 15:56:08 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:19:23.572 [2024-11-17 15:56:09.052368] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:19:23.572 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:19:23.572 { 00:19:23.572 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:23.572 "listen_address": { 00:19:23.572 "trtype": "rdma", 00:19:23.572 "traddr": "192.168.100.8", 00:19:23.572 "trsvcid": "4421" 00:19:23.572 }, 00:19:23.572 "method": "nvmf_subsystem_remove_listener", 00:19:23.572 "req_id": 1 00:19:23.572 } 00:19:23.572 Got JSON-RPC error response 00:19:23.572 response: 00:19:23.572 { 00:19:23.572 "code": -32602, 00:19:23.572 "message": "Invalid parameters" 00:19:23.572 }' 00:19:23.572 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:19:23.572 { 00:19:23.572 "nqn": "nqn.2016-06.io.spdk:cnode", 00:19:23.572 "listen_address": { 00:19:23.572 "trtype": "rdma", 00:19:23.572 "traddr": "192.168.100.8", 00:19:23.572 "trsvcid": "4421" 00:19:23.572 }, 00:19:23.572 "method": "nvmf_subsystem_remove_listener", 00:19:23.572 "req_id": 1 00:19:23.572 } 00:19:23.572 Got JSON-RPC error response 00:19:23.572 response: 00:19:23.572 { 00:19:23.572 "code": -32602, 00:19:23.572 "message": "Invalid parameters" 00:19:23.572 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:19:23.572 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31349 -i 0 00:19:23.834 [2024-11-17 15:56:09.253066] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31349: invalid cntlid range [0-65519] 00:19:23.834 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:19:23.834 { 00:19:23.834 "nqn": "nqn.2016-06.io.spdk:cnode31349", 00:19:23.834 "min_cntlid": 0, 00:19:23.834 "method": "nvmf_create_subsystem", 00:19:23.834 "req_id": 1 00:19:23.834 } 00:19:23.834 Got JSON-RPC error response 00:19:23.834 response: 00:19:23.834 { 00:19:23.834 "code": -32602, 00:19:23.834 "message": "Invalid cntlid range [0-65519]" 00:19:23.834 }' 00:19:23.834 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:19:23.834 { 00:19:23.834 "nqn": "nqn.2016-06.io.spdk:cnode31349", 00:19:23.834 "min_cntlid": 0, 00:19:23.834 "method": "nvmf_create_subsystem", 00:19:23.834 "req_id": 1 00:19:23.834 } 00:19:23.834 Got JSON-RPC error response 00:19:23.834 response: 00:19:23.834 { 00:19:23.834 "code": -32602, 00:19:23.834 "message": 
"Invalid cntlid range [0-65519]" 00:19:23.834 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:23.834 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22713 -i 65520 00:19:24.091 [2024-11-17 15:56:09.453862] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22713: invalid cntlid range [65520-65519] 00:19:24.091 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:19:24.091 { 00:19:24.091 "nqn": "nqn.2016-06.io.spdk:cnode22713", 00:19:24.091 "min_cntlid": 65520, 00:19:24.091 "method": "nvmf_create_subsystem", 00:19:24.091 "req_id": 1 00:19:24.091 } 00:19:24.091 Got JSON-RPC error response 00:19:24.091 response: 00:19:24.091 { 00:19:24.091 "code": -32602, 00:19:24.091 "message": "Invalid cntlid range [65520-65519]" 00:19:24.091 }' 00:19:24.091 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:19:24.091 { 00:19:24.091 "nqn": "nqn.2016-06.io.spdk:cnode22713", 00:19:24.091 "min_cntlid": 65520, 00:19:24.091 "method": "nvmf_create_subsystem", 00:19:24.091 "req_id": 1 00:19:24.091 } 00:19:24.091 Got JSON-RPC error response 00:19:24.091 response: 00:19:24.091 { 00:19:24.091 "code": -32602, 00:19:24.091 "message": "Invalid cntlid range [65520-65519]" 00:19:24.091 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:24.091 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7357 -I 0 00:19:24.348 [2024-11-17 15:56:09.662589] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7357: invalid cntlid range [1-0] 00:19:24.348 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:19:24.348 { 00:19:24.348 "nqn": "nqn.2016-06.io.spdk:cnode7357", 00:19:24.348 "max_cntlid": 0, 00:19:24.348 "method": "nvmf_create_subsystem", 00:19:24.348 "req_id": 1 00:19:24.348 } 00:19:24.348 Got JSON-RPC error response 00:19:24.348 response: 00:19:24.348 { 00:19:24.349 "code": -32602, 00:19:24.349 "message": "Invalid cntlid range [1-0]" 00:19:24.349 }' 00:19:24.349 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:19:24.349 { 00:19:24.349 "nqn": "nqn.2016-06.io.spdk:cnode7357", 00:19:24.349 "max_cntlid": 0, 00:19:24.349 "method": "nvmf_create_subsystem", 00:19:24.349 "req_id": 1 00:19:24.349 } 00:19:24.349 Got JSON-RPC error response 00:19:24.349 response: 00:19:24.349 { 00:19:24.349 "code": -32602, 00:19:24.349 "message": "Invalid cntlid range [1-0]" 00:19:24.349 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:24.349 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13603 -I 65520 00:19:24.349 [2024-11-17 15:56:09.851254] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13603: invalid cntlid range [1-65520] 00:19:24.349 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:19:24.349 { 00:19:24.349 "nqn": "nqn.2016-06.io.spdk:cnode13603", 00:19:24.349 "max_cntlid": 65520, 00:19:24.349 "method": "nvmf_create_subsystem", 00:19:24.349 "req_id": 1 00:19:24.349 } 00:19:24.349 Got JSON-RPC 
error response 00:19:24.349 response: 00:19:24.349 { 00:19:24.349 "code": -32602, 00:19:24.349 "message": "Invalid cntlid range [1-65520]" 00:19:24.349 }' 00:19:24.349 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:19:24.349 { 00:19:24.349 "nqn": "nqn.2016-06.io.spdk:cnode13603", 00:19:24.349 "max_cntlid": 65520, 00:19:24.349 "method": "nvmf_create_subsystem", 00:19:24.349 "req_id": 1 00:19:24.349 } 00:19:24.349 Got JSON-RPC error response 00:19:24.349 response: 00:19:24.349 { 00:19:24.349 "code": -32602, 00:19:24.349 "message": "Invalid cntlid range [1-65520]" 00:19:24.349 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:24.349 15:56:09 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15187 -i 6 -I 5 00:19:24.606 [2024-11-17 15:56:10.056125] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15187: invalid cntlid range [6-5] 00:19:24.606 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:19:24.606 { 00:19:24.606 "nqn": "nqn.2016-06.io.spdk:cnode15187", 00:19:24.607 "min_cntlid": 6, 00:19:24.607 "max_cntlid": 5, 00:19:24.607 "method": "nvmf_create_subsystem", 00:19:24.607 "req_id": 1 00:19:24.607 } 00:19:24.607 Got JSON-RPC error response 00:19:24.607 response: 00:19:24.607 { 00:19:24.607 "code": -32602, 00:19:24.607 "message": "Invalid cntlid range [6-5]" 00:19:24.607 }' 00:19:24.607 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:19:24.607 { 00:19:24.607 "nqn": "nqn.2016-06.io.spdk:cnode15187", 00:19:24.607 "min_cntlid": 6, 00:19:24.607 "max_cntlid": 5, 00:19:24.607 "method": "nvmf_create_subsystem", 00:19:24.607 "req_id": 1 00:19:24.607 } 00:19:24.607 Got JSON-RPC error response 00:19:24.607 response: 00:19:24.607 { 00:19:24.607 "code": -32602, 00:19:24.607 "message": "Invalid cntlid range [6-5]" 00:19:24.607 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:19:24.607 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:19:24.864 { 00:19:24.864 "name": "foobar", 00:19:24.864 "method": "nvmf_delete_target", 00:19:24.864 "req_id": 1 00:19:24.864 } 00:19:24.864 Got JSON-RPC error response 00:19:24.864 response: 00:19:24.864 { 00:19:24.864 "code": -32602, 00:19:24.864 "message": "The specified target doesn'\''t exist, cannot delete it." 00:19:24.864 }' 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:19:24.864 { 00:19:24.864 "name": "foobar", 00:19:24.864 "method": "nvmf_delete_target", 00:19:24.864 "req_id": 1 00:19:24.864 } 00:19:24.864 Got JSON-RPC error response 00:19:24.864 response: 00:19:24.864 { 00:19:24.864 "code": -32602, 00:19:24.864 "message": "The specified target doesn't exist, cannot delete it." 
00:19:24.864 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:24.864 rmmod nvme_rdma 00:19:24.864 rmmod nvme_fabrics 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3323540 ']' 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3323540 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3323540 ']' 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3323540 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3323540 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.864 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3323540' 00:19:24.864 killing process with pid 3323540 00:19:24.865 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3323540 00:19:24.865 15:56:10 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3323540 00:19:26.763 15:56:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:26.763 15:56:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:26.763 00:19:26.763 real 0m13.180s 00:19:26.763 user 0m26.782s 00:19:26.763 sys 0m6.486s 00:19:26.763 15:56:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.763 15:56:11 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:19:26.763 ************************************ 00:19:26.763 
END TEST nvmf_invalid 00:19:26.763 ************************************ 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:26.763 ************************************ 00:19:26.763 START TEST nvmf_connect_stress 00:19:26.763 ************************************ 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:19:26.763 * Looking for test storage... 00:19:26.763 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:26.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.763 --rc genhtml_branch_coverage=1 00:19:26.763 --rc genhtml_function_coverage=1 00:19:26.763 --rc genhtml_legend=1 00:19:26.763 --rc geninfo_all_blocks=1 00:19:26.763 --rc geninfo_unexecuted_blocks=1 00:19:26.763 00:19:26.763 ' 00:19:26.763 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:26.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.763 --rc genhtml_branch_coverage=1 00:19:26.763 --rc genhtml_function_coverage=1 00:19:26.763 --rc genhtml_legend=1 00:19:26.763 --rc geninfo_all_blocks=1 00:19:26.763 --rc geninfo_unexecuted_blocks=1 00:19:26.763 00:19:26.763 ' 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:26.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.764 --rc genhtml_branch_coverage=1 00:19:26.764 --rc genhtml_function_coverage=1 00:19:26.764 --rc genhtml_legend=1 00:19:26.764 --rc geninfo_all_blocks=1 00:19:26.764 --rc geninfo_unexecuted_blocks=1 00:19:26.764 00:19:26.764 ' 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:26.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:26.764 --rc genhtml_branch_coverage=1 00:19:26.764 --rc genhtml_function_coverage=1 00:19:26.764 --rc genhtml_legend=1 00:19:26.764 --rc geninfo_all_blocks=1 00:19:26.764 --rc geninfo_unexecuted_blocks=1 00:19:26.764 00:19:26.764 ' 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:26.764 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:19:26.764 15:56:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:33.373 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.373 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:19:33.373 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:33.373 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:33.373 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:33.373 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:33.373 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:33.373 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:19:33.373 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:33.373 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:33.374 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:33.374 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:33.374 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:33.374 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.374 15:56:18 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # rdma_device_init 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:33.374 
15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:33.374 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:33.374 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:33.375 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:33.375 altname enp217s0f0np0 00:19:33.375 altname ens818f0np0 00:19:33.375 inet 192.168.100.8/24 scope global mlx_0_0 00:19:33.375 valid_lft forever preferred_lft forever 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:33.375 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:33.375 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:33.375 altname enp217s0f1np1 00:19:33.375 altname ens818f1np1 00:19:33.375 inet 192.168.100.9/24 scope global mlx_0_1 00:19:33.375 valid_lft forever preferred_lft forever 00:19:33.375 15:56:18 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:33.375 
15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:33.375 192.168.100.9' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:33.375 192.168.100.9' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # head -n 1 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:33.375 192.168.100.9' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # tail -n +2 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # head -n 1 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3328010 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3328010 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3328010 ']' 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.375 15:56:18 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.375 15:56:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:33.375 [2024-11-17 15:56:18.666843] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:33.375 [2024-11-17 15:56:18.666953] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:33.375 [2024-11-17 15:56:18.802376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:33.375 [2024-11-17 15:56:18.901675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:33.375 [2024-11-17 15:56:18.901722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:33.375 [2024-11-17 15:56:18.901735] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:33.375 [2024-11-17 15:56:18.901749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:33.375 [2024-11-17 15:56:18.901759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:33.375 [2024-11-17 15:56:18.904062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.375 [2024-11-17 15:56:18.904122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.375 [2024-11-17 15:56:18.904129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:33.940 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.940 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:19:33.940 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:33.940 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.940 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.198 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.198 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:34.198 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.198 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.198 [2024-11-17 15:56:19.521627] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f57e9f8b940) succeed. 
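The trace above is the target bring-up for this test: nvmfappstart launches build/bin/nvmf_tgt with core mask 0xE (reactors on cores 1-3), waitforlisten polls the RPC socket until the daemon answers, and rpc_cmd nvmf_create_transport registers the RDMA transport, at which point the mlx5 ports get their IB devices. A minimal by-hand equivalent using SPDK's scripts/rpc.py — a sketch only; the polling loop and the relative paths are assumptions, not taken from this job:

  # start the target as in the trace (shm id 0, tracepoint mask 0xFFFF, core mask 0xE) -- sketch
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  tgt_pid=$!
  # poll the default RPC socket until the target answers (waitforlisten does this with retries)
  until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  # register the RDMA transport with the same options seen in the trace
  ./scripts/rpc.py nvmf_create_transport -t RDMA --num-shared-buffers 1024 -u 8192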
00:19:34.198 [2024-11-17 15:56:19.530833] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f57e9f47940) succeed. 00:19:34.198 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.198 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:34.198 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.198 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.456 [2024-11-17 15:56:19.752916] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.456 NULL1 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3328151 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
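At this point the stress fixture is in place: nvmf_create_subsystem adds nqn.2016-06.io.spdk:cnode1 (allow-any-host, up to 10 namespaces), nvmf_subsystem_add_listener exposes it over RDMA on 192.168.100.8:4420, bdev_null_create backs the target with a 1000 MiB null bdev, and connect_stress is launched against the listener for 10 seconds while rpc.txt is being filled with 20 queued RPC snippets (the repeated for/cat entries continue below). A by-hand replay of the fixture setup, sketched with scripts/rpc.py — paths are assumed relative to an SPDK checkout, and this is not the CI driver itself:

  # recreate the fixture traced above -- sketch
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  # run the connect/disconnect storm against the listener for 10 seconds on core 0
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!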
00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.456 15:56:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:34.714 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.714 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:34.714 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:34.714 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.714 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:35.276 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.276 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:35.276 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:35.276 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.276 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:35.533 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.533 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:35.533 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:35.533 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.533 15:56:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:35.791 15:56:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.791 15:56:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:35.791 15:56:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:35.791 15:56:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.791 15:56:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.356 15:56:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:36.356 15:56:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:36.356 15:56:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.356 15:56:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.356 15:56:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.614 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.614 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:36.614 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.614 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.614 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:36.871 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.871 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:36.871 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:36.871 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.871 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:37.436 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.436 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:37.436 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.436 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.436 15:56:22 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:37.694 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.694 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:37.694 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.694 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.694 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:37.951 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.951 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:37.951 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:37.951 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.951 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:38.516 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:19:38.516 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:38.516 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.516 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.516 15:56:23 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:38.773 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.773 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:38.773 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:38.773 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.773 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:39.339 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.339 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:39.339 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.339 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.339 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:39.597 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.597 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:39.597 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.597 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.597 15:56:24 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:39.854 15:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.854 15:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:39.854 15:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:39.854 15:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.854 15:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.420 15:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.420 15:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:40.420 15:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:40.420 15:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.420 15:56:25 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.678 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.678 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:40.678 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:40.678 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.678 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:40.936 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.936 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:40.936 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:40.936 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.936 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:41.501 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.501 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:41.501 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.501 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.501 15:56:26 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:41.758 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.758 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:41.758 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:41.758 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.758 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:42.016 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.016 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:42.016 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.016 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.016 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:42.580 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.580 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:42.580 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.580 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.580 15:56:27 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:42.838 15:56:28 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.838 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:42.838 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:42.838 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.838 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.096 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.096 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:43.096 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.096 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.096 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.661 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.661 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:43.661 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.661 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.661 15:56:28 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:43.918 15:56:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.918 15:56:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:43.918 15:56:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:43.918 15:56:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.918 15:56:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.176 15:56:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.176 15:56:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:44.176 15:56:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.176 15:56:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.176 15:56:29 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.741 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.741 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:44.741 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:19:44.741 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.741 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:44.741 
Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:19:44.999 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.999 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3328151 00:19:44.999 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3328151) - No such process 00:19:44.999 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3328151 00:19:44.999 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:19:44.999 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:44.999 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:19:44.999 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:44.999 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:19:44.999 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:45.000 rmmod nvme_rdma 00:19:45.000 rmmod nvme_fabrics 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3328010 ']' 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3328010 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3328010 ']' 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3328010 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3328010 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3328010' 00:19:45.000 killing process with pid 3328010 00:19:45.000 15:56:30 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3328010 00:19:45.000 15:56:30 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3328010 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:46.927 00:19:46.927 real 0m19.943s 00:19:46.927 user 0m44.059s 00:19:46.927 sys 0m9.184s 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:19:46.927 ************************************ 00:19:46.927 END TEST nvmf_connect_stress 00:19:46.927 ************************************ 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:46.927 ************************************ 00:19:46.927 START TEST nvmf_fused_ordering 00:19:46.927 ************************************ 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:19:46.927 * Looking for test storage... 
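For reference, the long run of kill -0 3328151 / rpc_cmd pairs that closed out nvmf_connect_stress above is the script keeping the target's RPC server under management load until the stress tool exits: once kill -0 reports "No such process" it waits on the PID, removes rpc.txt, clears its exit trap and calls nvmftestfini, which unloads nvme-rdma/nvme-fabrics and kills nvmf_tgt (pid 3328010). A condensed sketch of that monitor-and-teardown pattern — variable names carried over from the sketches above, and the single nvmf_get_subsystems call stands in for the RPC batch the real script replays:

  # poll the stress tool and keep issuing RPCs while connections churn -- sketch
  while kill -0 "$PERF_PID" 2>/dev/null; do
      ./scripts/rpc.py nvmf_get_subsystems >/dev/null
  done
  wait "$PERF_PID" || true            # reaping an already-exited PID; the "No such process" above is expected
  rm -f test/nvmf/target/rpc.txt
  # teardown mirrored from nvmftestfini: unload initiator modules, then stop the target
  modprobe -r nvme_rdma nvme_fabrics || true
  kill "$tgt_pid" 2>/dev/null && wait "$tgt_pid" 2>/dev/null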
00:19:46.927 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.927 --rc genhtml_branch_coverage=1 00:19:46.927 --rc genhtml_function_coverage=1 00:19:46.927 --rc genhtml_legend=1 00:19:46.927 --rc geninfo_all_blocks=1 00:19:46.927 --rc geninfo_unexecuted_blocks=1 00:19:46.927 00:19:46.927 ' 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.927 --rc genhtml_branch_coverage=1 00:19:46.927 --rc genhtml_function_coverage=1 00:19:46.927 --rc genhtml_legend=1 00:19:46.927 --rc geninfo_all_blocks=1 00:19:46.927 --rc geninfo_unexecuted_blocks=1 00:19:46.927 00:19:46.927 ' 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.927 --rc genhtml_branch_coverage=1 00:19:46.927 --rc genhtml_function_coverage=1 00:19:46.927 --rc genhtml_legend=1 00:19:46.927 --rc geninfo_all_blocks=1 00:19:46.927 --rc geninfo_unexecuted_blocks=1 00:19:46.927 00:19:46.927 ' 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:46.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.927 --rc genhtml_branch_coverage=1 00:19:46.927 --rc genhtml_function_coverage=1 00:19:46.927 --rc genhtml_legend=1 00:19:46.927 --rc geninfo_all_blocks=1 00:19:46.927 --rc geninfo_unexecuted_blocks=1 00:19:46.927 00:19:46.927 ' 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.927 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:46.928 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:19:46.928 15:56:32 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:55.032 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:55.032 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:55.032 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:55.032 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.032 15:56:39 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # rdma_device_init 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:55.032 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@530 -- # allocate_nic_ips 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:55.033 
15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:55.033 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:55.033 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:55.033 altname enp217s0f0np0 00:19:55.033 altname ens818f0np0 00:19:55.033 inet 192.168.100.8/24 scope global mlx_0_0 00:19:55.033 valid_lft forever preferred_lft forever 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:55.033 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:55.033 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:55.033 altname enp217s0f1np1 00:19:55.033 altname ens818f1np1 00:19:55.033 inet 192.168.100.9/24 scope global mlx_0_1 00:19:55.033 valid_lft forever preferred_lft forever 00:19:55.033 15:56:39 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:55.033 
15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:19:55.033 192.168.100.9' 00:19:55.033 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:19:55.033 192.168.100.9' 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # head -n 1 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:19:55.034 192.168.100.9' 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # tail -n +2 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # head -n 1 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3333610 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3333610 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3333610 ']' 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.034 15:56:39 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.034 15:56:39 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.034 [2024-11-17 15:56:39.491884] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:55.034 [2024-11-17 15:56:39.491980] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.034 [2024-11-17 15:56:39.622006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.034 [2024-11-17 15:56:39.719194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.034 [2024-11-17 15:56:39.719243] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.034 [2024-11-17 15:56:39.719256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.034 [2024-11-17 15:56:39.719268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.034 [2024-11-17 15:56:39.719278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.034 [2024-11-17 15:56:39.720614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.034 [2024-11-17 15:56:40.365468] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f36bd9bd940) succeed. 00:19:55.034 [2024-11-17 15:56:40.374715] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f36bd979940) succeed. 
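For reference, the target-side configuration that the fused_ordering test drives through rpc_cmd in the trace above and just below corresponds roughly to the following scripts/rpc.py invocations. This is a hedged sketch reconstructed from the commands recorded in this log (RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1, listener on 192.168.100.8:4420, null bdev namespace); the RPC variable and the idea of issuing the calls manually are assumptions, not part of the original script.

    #!/usr/bin/env bash
    # Sketch (assumption): manual equivalent of the rpc_cmd calls in this trace,
    # issued against an already-running nvmf_tgt on the default /var/tmp/spdk.sock.
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Create the RDMA transport with the options shown in the log.
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

    # Create the subsystem: allow any host (-a), set the serial, cap at 10 namespaces.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

    # Listen on the first RDMA-capable address discovered earlier (192.168.100.8:4420).
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Back namespace 1 with a 1000 MB, 512-byte-block null bdev (reported as "size: 1GB" below).
    $RPC bdev_null_create NULL1 1000 512
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1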
00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.034 [2024-11-17 15:56:40.453170] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.034 NULL1 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.034 15:56:40 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:55.034 [2024-11-17 15:56:40.532392] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:19:55.034 [2024-11-17 15:56:40.532453] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3333844 ] 00:19:55.292 Attached to nqn.2016-06.io.spdk:cnode1 00:19:55.292 Namespace ID: 1 size: 1GB 00:19:55.292 fused_ordering(0) 00:19:55.292 fused_ordering(1) 00:19:55.292 fused_ordering(2) 00:19:55.292 fused_ordering(3) 00:19:55.292 fused_ordering(4) 00:19:55.292 fused_ordering(5) 00:19:55.292 fused_ordering(6) 00:19:55.292 fused_ordering(7) 00:19:55.292 fused_ordering(8) 00:19:55.292 fused_ordering(9) 00:19:55.292 fused_ordering(10) 00:19:55.292 fused_ordering(11) 00:19:55.292 fused_ordering(12) 00:19:55.292 fused_ordering(13) 00:19:55.292 fused_ordering(14) 00:19:55.292 fused_ordering(15) 00:19:55.292 fused_ordering(16) 00:19:55.292 fused_ordering(17) 00:19:55.292 fused_ordering(18) 00:19:55.292 fused_ordering(19) 00:19:55.292 fused_ordering(20) 00:19:55.292 fused_ordering(21) 00:19:55.292 fused_ordering(22) 00:19:55.292 fused_ordering(23) 00:19:55.292 fused_ordering(24) 00:19:55.292 fused_ordering(25) 00:19:55.292 fused_ordering(26) 00:19:55.292 fused_ordering(27) 00:19:55.292 fused_ordering(28) 00:19:55.292 fused_ordering(29) 00:19:55.292 fused_ordering(30) 00:19:55.292 fused_ordering(31) 00:19:55.292 fused_ordering(32) 00:19:55.292 fused_ordering(33) 00:19:55.292 fused_ordering(34) 00:19:55.292 fused_ordering(35) 00:19:55.292 fused_ordering(36) 00:19:55.292 fused_ordering(37) 00:19:55.292 fused_ordering(38) 00:19:55.292 fused_ordering(39) 00:19:55.292 fused_ordering(40) 00:19:55.292 fused_ordering(41) 00:19:55.292 fused_ordering(42) 00:19:55.292 fused_ordering(43) 00:19:55.292 fused_ordering(44) 00:19:55.292 fused_ordering(45) 00:19:55.292 fused_ordering(46) 00:19:55.292 fused_ordering(47) 00:19:55.292 fused_ordering(48) 00:19:55.292 fused_ordering(49) 00:19:55.292 fused_ordering(50) 00:19:55.292 fused_ordering(51) 00:19:55.292 fused_ordering(52) 00:19:55.292 fused_ordering(53) 00:19:55.292 fused_ordering(54) 00:19:55.292 fused_ordering(55) 00:19:55.292 fused_ordering(56) 00:19:55.292 fused_ordering(57) 00:19:55.292 fused_ordering(58) 00:19:55.292 fused_ordering(59) 00:19:55.292 fused_ordering(60) 00:19:55.292 fused_ordering(61) 00:19:55.292 fused_ordering(62) 00:19:55.292 fused_ordering(63) 00:19:55.292 fused_ordering(64) 00:19:55.292 fused_ordering(65) 00:19:55.292 fused_ordering(66) 00:19:55.292 fused_ordering(67) 00:19:55.292 fused_ordering(68) 00:19:55.292 fused_ordering(69) 00:19:55.292 fused_ordering(70) 00:19:55.292 fused_ordering(71) 00:19:55.292 fused_ordering(72) 00:19:55.292 fused_ordering(73) 00:19:55.292 fused_ordering(74) 00:19:55.292 fused_ordering(75) 00:19:55.292 fused_ordering(76) 00:19:55.292 fused_ordering(77) 00:19:55.292 fused_ordering(78) 00:19:55.292 fused_ordering(79) 00:19:55.292 fused_ordering(80) 00:19:55.292 fused_ordering(81) 00:19:55.292 fused_ordering(82) 00:19:55.292 fused_ordering(83) 00:19:55.292 fused_ordering(84) 00:19:55.292 fused_ordering(85) 00:19:55.292 fused_ordering(86) 00:19:55.292 fused_ordering(87) 00:19:55.292 fused_ordering(88) 00:19:55.292 fused_ordering(89) 00:19:55.292 fused_ordering(90) 00:19:55.292 fused_ordering(91) 00:19:55.292 fused_ordering(92) 00:19:55.292 fused_ordering(93) 00:19:55.292 fused_ordering(94) 00:19:55.292 fused_ordering(95) 00:19:55.292 fused_ordering(96) 00:19:55.292 fused_ordering(97) 00:19:55.292 fused_ordering(98) 
00:19:55.292 fused_ordering(99) 00:19:55.292 fused_ordering(100) 00:19:55.292 fused_ordering(101) 00:19:55.292 fused_ordering(102) 00:19:55.292 fused_ordering(103) 00:19:55.292 fused_ordering(104) 00:19:55.292 fused_ordering(105) 00:19:55.292 fused_ordering(106) 00:19:55.293 fused_ordering(107) 00:19:55.293 fused_ordering(108) 00:19:55.293 fused_ordering(109) 00:19:55.293 fused_ordering(110) 00:19:55.293 fused_ordering(111) 00:19:55.293 fused_ordering(112) 00:19:55.293 fused_ordering(113) 00:19:55.293 fused_ordering(114) 00:19:55.293 fused_ordering(115) 00:19:55.293 fused_ordering(116) 00:19:55.293 fused_ordering(117) 00:19:55.293 fused_ordering(118) 00:19:55.293 fused_ordering(119) 00:19:55.293 fused_ordering(120) 00:19:55.293 fused_ordering(121) 00:19:55.293 fused_ordering(122) 00:19:55.293 fused_ordering(123) 00:19:55.293 fused_ordering(124) 00:19:55.293 fused_ordering(125) 00:19:55.293 fused_ordering(126) 00:19:55.293 fused_ordering(127) 00:19:55.293 fused_ordering(128) 00:19:55.293 fused_ordering(129) 00:19:55.293 fused_ordering(130) 00:19:55.293 fused_ordering(131) 00:19:55.293 fused_ordering(132) 00:19:55.293 fused_ordering(133) 00:19:55.293 fused_ordering(134) 00:19:55.293 fused_ordering(135) 00:19:55.293 fused_ordering(136) 00:19:55.293 fused_ordering(137) 00:19:55.293 fused_ordering(138) 00:19:55.293 fused_ordering(139) 00:19:55.293 fused_ordering(140) 00:19:55.293 fused_ordering(141) 00:19:55.293 fused_ordering(142) 00:19:55.293 fused_ordering(143) 00:19:55.293 fused_ordering(144) 00:19:55.293 fused_ordering(145) 00:19:55.293 fused_ordering(146) 00:19:55.293 fused_ordering(147) 00:19:55.293 fused_ordering(148) 00:19:55.293 fused_ordering(149) 00:19:55.293 fused_ordering(150) 00:19:55.293 fused_ordering(151) 00:19:55.293 fused_ordering(152) 00:19:55.293 fused_ordering(153) 00:19:55.293 fused_ordering(154) 00:19:55.293 fused_ordering(155) 00:19:55.293 fused_ordering(156) 00:19:55.293 fused_ordering(157) 00:19:55.293 fused_ordering(158) 00:19:55.293 fused_ordering(159) 00:19:55.293 fused_ordering(160) 00:19:55.293 fused_ordering(161) 00:19:55.293 fused_ordering(162) 00:19:55.293 fused_ordering(163) 00:19:55.293 fused_ordering(164) 00:19:55.293 fused_ordering(165) 00:19:55.293 fused_ordering(166) 00:19:55.293 fused_ordering(167) 00:19:55.293 fused_ordering(168) 00:19:55.293 fused_ordering(169) 00:19:55.293 fused_ordering(170) 00:19:55.293 fused_ordering(171) 00:19:55.293 fused_ordering(172) 00:19:55.293 fused_ordering(173) 00:19:55.293 fused_ordering(174) 00:19:55.293 fused_ordering(175) 00:19:55.293 fused_ordering(176) 00:19:55.293 fused_ordering(177) 00:19:55.293 fused_ordering(178) 00:19:55.293 fused_ordering(179) 00:19:55.293 fused_ordering(180) 00:19:55.293 fused_ordering(181) 00:19:55.293 fused_ordering(182) 00:19:55.293 fused_ordering(183) 00:19:55.293 fused_ordering(184) 00:19:55.293 fused_ordering(185) 00:19:55.293 fused_ordering(186) 00:19:55.293 fused_ordering(187) 00:19:55.293 fused_ordering(188) 00:19:55.293 fused_ordering(189) 00:19:55.293 fused_ordering(190) 00:19:55.293 fused_ordering(191) 00:19:55.293 fused_ordering(192) 00:19:55.293 fused_ordering(193) 00:19:55.293 fused_ordering(194) 00:19:55.293 fused_ordering(195) 00:19:55.293 fused_ordering(196) 00:19:55.293 fused_ordering(197) 00:19:55.293 fused_ordering(198) 00:19:55.293 fused_ordering(199) 00:19:55.293 fused_ordering(200) 00:19:55.293 fused_ordering(201) 00:19:55.293 fused_ordering(202) 00:19:55.293 fused_ordering(203) 00:19:55.293 fused_ordering(204) 00:19:55.293 fused_ordering(205) 00:19:55.551 
fused_ordering(206) ... fused_ordering(958) [00:19:55.551 to 00:19:56.069: repetitive fused_ordering counter output collapsed; entries 206 through 958 each repeat the same timestamped "fused_ordering(N)" pattern with N incremented by one]
00:19:56.069 fused_ordering(959) 00:19:56.069 fused_ordering(960) 00:19:56.069 fused_ordering(961) 00:19:56.069 fused_ordering(962) 00:19:56.069 fused_ordering(963) 00:19:56.069 fused_ordering(964) 00:19:56.069 fused_ordering(965) 00:19:56.069 fused_ordering(966) 00:19:56.069 fused_ordering(967) 00:19:56.069 fused_ordering(968) 00:19:56.069 fused_ordering(969) 00:19:56.069 fused_ordering(970) 00:19:56.069 fused_ordering(971) 00:19:56.070 fused_ordering(972) 00:19:56.070 fused_ordering(973) 00:19:56.070 fused_ordering(974) 00:19:56.070 fused_ordering(975) 00:19:56.070 fused_ordering(976) 00:19:56.070 fused_ordering(977) 00:19:56.070 fused_ordering(978) 00:19:56.070 fused_ordering(979) 00:19:56.070 fused_ordering(980) 00:19:56.070 fused_ordering(981) 00:19:56.070 fused_ordering(982) 00:19:56.070 fused_ordering(983) 00:19:56.070 fused_ordering(984) 00:19:56.070 fused_ordering(985) 00:19:56.070 fused_ordering(986) 00:19:56.070 fused_ordering(987) 00:19:56.070 fused_ordering(988) 00:19:56.070 fused_ordering(989) 00:19:56.070 fused_ordering(990) 00:19:56.070 fused_ordering(991) 00:19:56.070 fused_ordering(992) 00:19:56.070 fused_ordering(993) 00:19:56.070 fused_ordering(994) 00:19:56.070 fused_ordering(995) 00:19:56.070 fused_ordering(996) 00:19:56.070 fused_ordering(997) 00:19:56.070 fused_ordering(998) 00:19:56.070 fused_ordering(999) 00:19:56.070 fused_ordering(1000) 00:19:56.070 fused_ordering(1001) 00:19:56.070 fused_ordering(1002) 00:19:56.070 fused_ordering(1003) 00:19:56.070 fused_ordering(1004) 00:19:56.070 fused_ordering(1005) 00:19:56.070 fused_ordering(1006) 00:19:56.070 fused_ordering(1007) 00:19:56.070 fused_ordering(1008) 00:19:56.070 fused_ordering(1009) 00:19:56.070 fused_ordering(1010) 00:19:56.070 fused_ordering(1011) 00:19:56.070 fused_ordering(1012) 00:19:56.070 fused_ordering(1013) 00:19:56.070 fused_ordering(1014) 00:19:56.070 fused_ordering(1015) 00:19:56.070 fused_ordering(1016) 00:19:56.070 fused_ordering(1017) 00:19:56.070 fused_ordering(1018) 00:19:56.070 fused_ordering(1019) 00:19:56.070 fused_ordering(1020) 00:19:56.070 fused_ordering(1021) 00:19:56.070 fused_ordering(1022) 00:19:56.070 fused_ordering(1023) 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:56.070 rmmod nvme_rdma 00:19:56.070 rmmod nvme_fabrics 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:19:56.070 15:56:41 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3333610 ']' 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3333610 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3333610 ']' 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3333610 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3333610 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3333610' 00:19:56.070 killing process with pid 3333610 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3333610 00:19:56.070 15:56:41 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3333610 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:19:57.441 00:19:57.441 real 0m10.625s 00:19:57.441 user 0m6.185s 00:19:57.441 sys 0m6.074s 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:57.441 ************************************ 00:19:57.441 END TEST nvmf_fused_ordering 00:19:57.441 ************************************ 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:57.441 ************************************ 00:19:57.441 START TEST nvmf_ns_masking 00:19:57.441 ************************************ 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:19:57.441 * Looking for test storage... 
00:19:57.441 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:57.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.441 --rc genhtml_branch_coverage=1 00:19:57.441 --rc genhtml_function_coverage=1 00:19:57.441 --rc genhtml_legend=1 00:19:57.441 --rc geninfo_all_blocks=1 00:19:57.441 --rc geninfo_unexecuted_blocks=1 00:19:57.441 00:19:57.441 ' 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:57.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.441 --rc genhtml_branch_coverage=1 00:19:57.441 --rc genhtml_function_coverage=1 00:19:57.441 --rc genhtml_legend=1 00:19:57.441 --rc geninfo_all_blocks=1 00:19:57.441 --rc geninfo_unexecuted_blocks=1 00:19:57.441 00:19:57.441 ' 00:19:57.441 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:57.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.441 --rc genhtml_branch_coverage=1 00:19:57.441 --rc genhtml_function_coverage=1 00:19:57.441 --rc genhtml_legend=1 00:19:57.441 --rc geninfo_all_blocks=1 00:19:57.441 --rc geninfo_unexecuted_blocks=1 00:19:57.442 00:19:57.442 ' 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:57.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.442 --rc genhtml_branch_coverage=1 00:19:57.442 --rc genhtml_function_coverage=1 00:19:57.442 --rc genhtml_legend=1 00:19:57.442 --rc geninfo_all_blocks=1 00:19:57.442 --rc geninfo_unexecuted_blocks=1 00:19:57.442 00:19:57.442 ' 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.442 15:56:42 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:57.442 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:57.700 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:57.700 15:56:42 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0fc2111b-e716-433f-aa19-64d787251a1c 00:19:57.700 15:56:42 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=66ee5199-b53b-4d9f-ab9e-290a88560ca2 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1a432b4f-5f6b-46a4-b5e1-23634fd81975 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:19:57.700 15:56:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.253 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:04.254 15:56:48 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:04.254 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:04.254 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:04.254 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:04.254 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # rdma_device_init 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:04.254 15:56:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:04.254 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:04.254 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:04.254 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:04.254 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:04.254 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:04.254 altname enp217s0f0np0 00:20:04.254 altname ens818f0np0 00:20:04.254 inet 192.168.100.8/24 scope global mlx_0_0 00:20:04.254 valid_lft forever preferred_lft forever 00:20:04.254 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:04.254 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:04.254 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:04.254 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:04.254 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:04.254 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:04.254 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:04.254 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:04.254 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:04.254 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:04.254 link/ether 
ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:04.254 altname enp217s0f1np1 00:20:04.254 altname ens818f1np1 00:20:04.254 inet 192.168.100.9/24 scope global mlx_0_1 00:20:04.254 valid_lft forever preferred_lft forever 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:04.255 192.168.100.9' 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:04.255 192.168.100.9' 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # head -n 1 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:04.255 192.168.100.9' 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # tail -n +2 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # head -n 1 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3337333 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3337333 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3337333 ']' 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:04.255 15:56:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:04.255 [2024-11-17 15:56:49.212531] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:20:04.255 [2024-11-17 15:56:49.212633] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.255 [2024-11-17 15:56:49.343834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.255 [2024-11-17 15:56:49.438410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.255 [2024-11-17 15:56:49.438456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.255 [2024-11-17 15:56:49.438467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.255 [2024-11-17 15:56:49.438480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.255 [2024-11-17 15:56:49.438489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.255 [2024-11-17 15:56:49.439797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.513 15:56:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.513 15:56:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:20:04.513 15:56:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.513 15:56:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.513 15:56:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:04.513 15:56:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.513 15:56:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:04.770 [2024-11-17 15:56:50.249134] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f7f71b1d940) succeed. 00:20:04.770 [2024-11-17 15:56:50.258955] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f7f711bd940) succeed. 
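The target-side bring-up preceding this point is compact: nvmfappstart launches build/bin/nvmf_tgt (with -e 0xFFFF to enable all tracepoint groups in this run), and ns_masking.sh then creates an RDMA transport, which is what produces the two create_ib_device notices above. A condensed sketch of those steps, assuming rpc.py is on PATH (the trace invokes it by its full workspace path):

    nvmf_tgt -i 0 -e 0xFFFF &
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # the trace then continues with two 64 MiB malloc bdevs (Malloc1, Malloc2),
    # subsystem nqn.2016-06.io.spdk:cnode1, and an RDMA listener on 192.168.100.8:4420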
00:20:05.027 15:56:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:20:05.027 15:56:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:20:05.027 15:56:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:05.284 Malloc1 00:20:05.284 15:56:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:05.541 Malloc2 00:20:05.541 15:56:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:05.541 15:56:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:20:05.799 15:56:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:06.056 [2024-11-17 15:56:51.391921] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:06.056 15:56:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:20:06.056 15:56:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1a432b4f-5f6b-46a4-b5e1-23634fd81975 -a 192.168.100.8 -s 4420 -i 4 00:20:06.313 15:56:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:20:06.313 15:56:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:20:06.313 15:56:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:06.313 15:56:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:06.313 15:56:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:20:08.207 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:08.207 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:08.207 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:08.464 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:08.464 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:08.464 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:20:08.465 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:08.465 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:20:08.465 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:08.465 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:08.465 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:20:08.465 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:08.465 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:08.465 [ 0]:0x1 00:20:08.465 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:08.465 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:08.465 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f16a889fab545c8aa0370f9f77342b7 00:20:08.465 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f16a889fab545c8aa0370f9f77342b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:08.465 15:56:53 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:08.722 [ 0]:0x1 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f16a889fab545c8aa0370f9f77342b7 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f16a889fab545c8aa0370f9f77342b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:08.722 [ 1]:0x2 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1402d56b74244e7a891435cd69ecb408 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1402d56b74244e7a891435cd69ecb408 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:20:08.722 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:20:08.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:08.980 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:09.237 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:20:09.494 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:20:09.494 15:56:54 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1a432b4f-5f6b-46a4-b5e1-23634fd81975 -a 192.168.100.8 -s 4420 -i 4 00:20:09.751 15:56:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:20:09.751 15:56:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:20:09.751 15:56:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:09.751 15:56:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:20:09.751 15:56:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:20:09.751 15:56:55 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:20:11.738 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:11.738 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:11.738 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:11.738 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:11.738 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:11.739 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:11.997 [ 0]:0x2 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1402d56b74244e7a891435cd69ecb408 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1402d56b74244e7a891435cd69ecb408 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:11.997 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:20:12.255 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:12.255 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:12.255 [ 0]:0x1 00:20:12.255 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:12.255 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:12.255 15:56:57 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f16a889fab545c8aa0370f9f77342b7 00:20:12.256 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f16a889fab545c8aa0370f9f77342b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:12.256 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:20:12.256 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:12.256 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:12.256 [ 1]:0x2 00:20:12.256 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:12.256 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:12.256 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1402d56b74244e7a891435cd69ecb408 00:20:12.256 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1402d56b74244e7a891435cd69ecb408 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:12.256 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( 
es > 128 )) 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:12.514 [ 0]:0x2 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1402d56b74244e7a891435cd69ecb408 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1402d56b74244e7a891435cd69ecb408 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:20:12.514 15:56:57 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:12.773 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:12.773 15:56:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:13.031 15:56:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:20:13.031 15:56:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1a432b4f-5f6b-46a4-b5e1-23634fd81975 -a 192.168.100.8 -s 4420 -i 4 00:20:13.289 15:56:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:13.289 15:56:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:20:13.289 15:56:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:13.289 15:56:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:20:13.289 15:56:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:20:13.289 15:56:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:15.819 15:57:00 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:15.819 [ 0]:0x1 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f16a889fab545c8aa0370f9f77342b7 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f16a889fab545c8aa0370f9f77342b7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:15.819 [ 1]:0x2 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1402d56b74244e7a891435cd69ecb408 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1402d56b74244e7a891435cd69ecb408 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:15.819 15:57:00 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:15.819 15:57:01 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:15.819 [ 0]:0x2 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1402d56b74244e7a891435cd69ecb408 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1402d56b74244e7a891435cd69ecb408 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.819 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:15.820 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.820 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:15.820 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:20:15.820 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:20:16.079 [2024-11-17 15:57:01.410334] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:20:16.079 request: 00:20:16.079 { 00:20:16.079 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.079 "nsid": 2, 00:20:16.079 "host": "nqn.2016-06.io.spdk:host1", 00:20:16.079 "method": "nvmf_ns_remove_host", 00:20:16.079 "req_id": 1 00:20:16.079 } 00:20:16.079 Got JSON-RPC error response 00:20:16.079 response: 00:20:16.079 { 00:20:16.079 "code": -32602, 00:20:16.079 "message": "Invalid parameters" 00:20:16.079 } 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:20:16.079 15:57:01 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:20:16.079 [ 0]:0x2 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1402d56b74244e7a891435cd69ecb408 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1402d56b74244e7a891435cd69ecb408 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:20:16.079 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:16.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:16.338 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3339756 00:20:16.338 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:20:16.338 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.338 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3339756 /var/tmp/host.sock 00:20:16.338 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3339756 ']' 00:20:16.338 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:16.338 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.338 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:16.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
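Most of the assertions in the preceding trace go through the ns_is_visible helper: it checks whether a namespace ID still shows up on the connected controller and whether that namespace reports a real NGUID (a masked namespace drops out of the list and/or identifies with an all-zero NGUID). A rough reconstruction from the traced commands, with the controller fixed to /dev/nvme0 as in this run; the in-tree helper may differ in minor details:

    ns_is_visible() {
        # show whether the NSID appears in the active namespace list (informational)
        nvme list-ns /dev/nvme0 | grep "$1"
        # the actual check: a visible namespace reports a non-zero NGUID
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }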
00:20:16.338 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.338 15:57:01 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:16.597 [2024-11-17 15:57:01.942487] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:20:16.597 [2024-11-17 15:57:01.942587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3339756 ] 00:20:16.597 [2024-11-17 15:57:02.073738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.854 [2024-11-17 15:57:02.174244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.420 15:57:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:17.420 15:57:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:20:17.420 15:57:02 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:17.677 15:57:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:17.935 15:57:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0fc2111b-e716-433f-aa19-64d787251a1c 00:20:17.935 15:57:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:17.935 15:57:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0FC2111BE716433FAA1964D787251A1C -i 00:20:18.193 15:57:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 66ee5199-b53b-4d9f-ab9e-290a88560ca2 00:20:18.193 15:57:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:18.193 15:57:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 66EE5199B53B4D9FAB9E290A88560CA2 -i 00:20:18.451 15:57:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:20:18.451 15:57:03 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:20:18.709 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:20:18.709 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:20:18.967 nvme0n1 00:20:18.967 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:20:18.967 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:20:19.225 nvme1n2 00:20:19.225 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:20:19.225 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:20:19.225 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:20:19.225 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:20:19.225 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:20:19.483 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:20:19.483 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:20:19.483 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:20:19.483 15:57:04 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:20:19.741 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0fc2111b-e716-433f-aa19-64d787251a1c == \0\f\c\2\1\1\1\b\-\e\7\1\6\-\4\3\3\f\-\a\a\1\9\-\6\4\d\7\8\7\2\5\1\a\1\c ]] 00:20:19.741 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:20:19.741 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:20:19.741 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:20:20.000 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 66ee5199-b53b-4d9f-ab9e-290a88560ca2 == \6\6\e\e\5\1\9\9\-\b\5\3\b\-\4\d\9\f\-\a\b\9\e\-\2\9\0\a\8\8\5\6\0\c\a\2 ]] 00:20:20.000 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:20.000 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 0fc2111b-e716-433f-aa19-64d787251a1c 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0FC2111BE716433FAA1964D787251A1C 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0FC2111BE716433FAA1964D787251A1C 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:20:20.257 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 0FC2111BE716433FAA1964D787251A1C 00:20:20.514 [2024-11-17 15:57:05.851508] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:20:20.514 [2024-11-17 15:57:05.851549] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:20:20.514 [2024-11-17 15:57:05.851570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:20:20.514 request: 00:20:20.514 { 00:20:20.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.514 "namespace": { 00:20:20.514 "bdev_name": "invalid", 00:20:20.514 "nsid": 1, 00:20:20.514 "nguid": "0FC2111BE716433FAA1964D787251A1C", 00:20:20.514 "no_auto_visible": false 00:20:20.514 }, 00:20:20.514 "method": "nvmf_subsystem_add_ns", 00:20:20.514 "req_id": 1 00:20:20.514 } 00:20:20.514 Got JSON-RPC error response 00:20:20.514 response: 00:20:20.514 { 00:20:20.514 "code": -32602, 00:20:20.514 "message": "Invalid parameters" 00:20:20.514 } 00:20:20.514 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:20:20.514 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:20.514 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:20.514 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:20.514 15:57:05 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 0fc2111b-e716-433f-aa19-64d787251a1c 00:20:20.514 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:20:20.514 15:57:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0FC2111BE716433FAA1964D787251A1C -i 00:20:20.772 15:57:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:20:22.673 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:20:22.673 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:20:22.673 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:20:22.932 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:20:22.932 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3339756 00:20:22.932 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3339756 ']' 00:20:22.932 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3339756 00:20:22.932 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:20:22.932 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:22.932 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3339756 00:20:22.932 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:22.932 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:22.932 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3339756' 00:20:22.932 killing process with pid 3339756 00:20:22.932 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3339756 00:20:22.932 15:57:08 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3339756 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:20:25.461 15:57:10 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:25.461 rmmod nvme_rdma 00:20:25.461 rmmod nvme_fabrics 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3337333 ']' 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3337333 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3337333 ']' 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3337333 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3337333 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3337333' 00:20:25.461 killing process with pid 3337333 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3337333 00:20:25.461 15:57:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3337333 00:20:26.837 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:26.837 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:26.837 00:20:26.837 real 0m29.483s 00:20:26.837 user 0m38.763s 00:20:26.837 sys 0m7.553s 00:20:26.837 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.837 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:20:26.837 ************************************ 00:20:26.837 END TEST nvmf_ns_masking 00:20:26.837 ************************************ 00:20:26.837 15:57:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:20:26.837 15:57:12 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:20:26.837 15:57:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:26.837 15:57:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.837 15:57:12 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:26.837 ************************************ 00:20:26.837 START TEST nvmf_nvme_cli 00:20:26.837 ************************************ 
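Before the next test begins, nvmftestfini tears the fixture down in the order just traced: flush, unload the host-side kernel modules, then stop the nvmf_tgt recorded in nvmfpid. A compressed sketch of that sequence (the PID is from this run; the real killprocess helper adds the uname/ps sanity checks seen above):

    sync
    modprobe -v -r nvme-rdma        # the rmmod nvme_rdma / nvme_fabrics lines come from this
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess: stop the nvmf_tgt started at the top of the test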
00:20:26.837 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:20:27.095 * Looking for test storage... 00:20:27.095 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:20:27.095 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:27.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.096 --rc genhtml_branch_coverage=1 00:20:27.096 --rc genhtml_function_coverage=1 00:20:27.096 --rc genhtml_legend=1 00:20:27.096 --rc geninfo_all_blocks=1 00:20:27.096 --rc geninfo_unexecuted_blocks=1 00:20:27.096 00:20:27.096 ' 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:27.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.096 --rc genhtml_branch_coverage=1 00:20:27.096 --rc genhtml_function_coverage=1 00:20:27.096 --rc genhtml_legend=1 00:20:27.096 --rc geninfo_all_blocks=1 00:20:27.096 --rc geninfo_unexecuted_blocks=1 00:20:27.096 00:20:27.096 ' 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:27.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.096 --rc genhtml_branch_coverage=1 00:20:27.096 --rc genhtml_function_coverage=1 00:20:27.096 --rc genhtml_legend=1 00:20:27.096 --rc geninfo_all_blocks=1 00:20:27.096 --rc geninfo_unexecuted_blocks=1 00:20:27.096 00:20:27.096 ' 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:27.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.096 --rc genhtml_branch_coverage=1 00:20:27.096 --rc genhtml_function_coverage=1 00:20:27.096 --rc genhtml_legend=1 00:20:27.096 --rc geninfo_all_blocks=1 00:20:27.096 --rc geninfo_unexecuted_blocks=1 00:20:27.096 00:20:27.096 ' 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:27.096 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:27.096 15:57:12 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:20:27.096 15:57:12 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:33.653 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.653 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:20:33.653 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:33.653 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:33.653 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:33.654 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:33.654 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:33.654 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:33.654 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # rdma_device_init 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:33.654 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:33.655 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:33.655 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:33.655 altname enp217s0f0np0 00:20:33.655 altname ens818f0np0 00:20:33.655 inet 192.168.100.8/24 scope global mlx_0_0 00:20:33.655 valid_lft forever preferred_lft forever 00:20:33.655 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:33.655 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:33.655 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:33.655 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:33.655 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:33.655 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:33.655 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:33.655 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:33.655 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:33.913 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:33.913 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:33.913 altname enp217s0f1np1 00:20:33.913 altname ens818f1np1 00:20:33.913 inet 192.168.100.9/24 scope global mlx_0_1 00:20:33.913 valid_lft forever preferred_lft forever 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:33.913 15:57:19 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:33.913 192.168.100.9' 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:33.913 192.168.100.9' 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # head -n 1 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:33.913 192.168.100.9' 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # tail -n +2 00:20:33.913 15:57:19 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # head -n 1 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3345245 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3345245 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3345245 ']' 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:33.913 15:57:19 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:33.913 [2024-11-17 15:57:19.395467] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:20:33.913 [2024-11-17 15:57:19.395560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.171 [2024-11-17 15:57:19.531045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.171 [2024-11-17 15:57:19.630806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.171 [2024-11-17 15:57:19.630853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:34.171 [2024-11-17 15:57:19.630865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.171 [2024-11-17 15:57:19.630880] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.171 [2024-11-17 15:57:19.630889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.171 [2024-11-17 15:57:19.633220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.171 [2024-11-17 15:57:19.633296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.171 [2024-11-17 15:57:19.633353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.171 [2024-11-17 15:57:19.633362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.737 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.737 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:20:34.737 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:34.737 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:34.737 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:34.737 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.737 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:34.737 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.737 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:34.737 [2024-11-17 15:57:20.272600] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f9d5a792940) succeed. 00:20:34.995 [2024-11-17 15:57:20.281976] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f9d5a74e940) succeed. 
00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.252 Malloc0 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.252 Malloc1 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.252 [2024-11-17 15:57:20.748153] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:35.252 15:57:20 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.252 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:20:35.509 00:20:35.509 Discovery Log Number of Records 2, Generation counter 2 00:20:35.509 =====Discovery Log Entry 0====== 00:20:35.509 trtype: rdma 00:20:35.509 adrfam: ipv4 00:20:35.509 subtype: current discovery subsystem 00:20:35.509 treq: not required 00:20:35.509 portid: 0 00:20:35.509 trsvcid: 4420 00:20:35.509 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:35.509 traddr: 192.168.100.8 00:20:35.509 eflags: explicit discovery connections, duplicate discovery information 00:20:35.509 rdma_prtype: not specified 00:20:35.509 rdma_qptype: connected 00:20:35.509 rdma_cms: rdma-cm 00:20:35.509 rdma_pkey: 0x0000 00:20:35.509 =====Discovery Log Entry 1====== 00:20:35.509 trtype: rdma 00:20:35.509 adrfam: ipv4 00:20:35.509 subtype: nvme subsystem 00:20:35.509 treq: not required 00:20:35.509 portid: 0 00:20:35.509 trsvcid: 4420 00:20:35.509 subnqn: nqn.2016-06.io.spdk:cnode1 00:20:35.509 traddr: 192.168.100.8 00:20:35.509 eflags: none 00:20:35.509 rdma_prtype: not specified 00:20:35.509 rdma_qptype: connected 00:20:35.509 rdma_cms: rdma-cm 00:20:35.509 rdma_pkey: 0x0000 00:20:35.509 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:20:35.509 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:20:35.509 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:35.509 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:35.509 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:35.509 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:35.509 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:35.509 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:35.509 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:35.509 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:20:35.509 15:57:20 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:36.442 15:57:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:20:36.442 15:57:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:20:36.442 15:57:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:36.442 15:57:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:20:36.442 15:57:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:20:36.442 15:57:21 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:20:38.340 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:38.340 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:38.340 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:20:38.598 /dev/nvme0n2 ]] 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:20:38.598 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:38.599 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:20:38.599 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:20:38.599 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:38.599 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:20:38.599 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:20:38.599 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:20:38.599 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:20:38.599 15:57:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:39.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:20:39.554 15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.554 
15:57:24 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:20:39.554 rmmod nvme_rdma 00:20:39.554 rmmod nvme_fabrics 00:20:39.554 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.554 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:20:39.554 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:20:39.554 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3345245 ']' 00:20:39.554 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3345245 00:20:39.554 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3345245 ']' 00:20:39.554 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3345245 00:20:39.554 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:20:39.554 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.554 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3345245 00:20:39.813 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.813 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.813 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3345245' 00:20:39.813 killing process with pid 3345245 00:20:39.813 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3345245 00:20:39.813 15:57:25 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3345245 00:20:41.714 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:41.714 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:20:41.714 00:20:41.714 real 0m14.733s 00:20:41.714 user 0m30.004s 00:20:41.714 sys 0m5.998s 00:20:41.714 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.714 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:20:41.714 ************************************ 00:20:41.714 END TEST nvmf_nvme_cli 00:20:41.714 ************************************ 00:20:41.714 15:57:27 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:20:41.714 15:57:27 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:20:41.714 15:57:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.714 15:57:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.714 15:57:27 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:41.714 ************************************ 00:20:41.714 START TEST nvmf_auth_target 00:20:41.714 ************************************ 00:20:41.714 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:20:41.973 * Looking for test storage... 00:20:41.973 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:41.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.973 --rc genhtml_branch_coverage=1 00:20:41.973 --rc genhtml_function_coverage=1 00:20:41.973 --rc genhtml_legend=1 00:20:41.973 --rc geninfo_all_blocks=1 00:20:41.973 --rc geninfo_unexecuted_blocks=1 00:20:41.973 00:20:41.973 ' 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:41.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.973 --rc genhtml_branch_coverage=1 00:20:41.973 --rc genhtml_function_coverage=1 00:20:41.973 --rc genhtml_legend=1 00:20:41.973 --rc geninfo_all_blocks=1 00:20:41.973 --rc geninfo_unexecuted_blocks=1 00:20:41.973 00:20:41.973 ' 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:41.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.973 --rc genhtml_branch_coverage=1 00:20:41.973 --rc genhtml_function_coverage=1 00:20:41.973 --rc genhtml_legend=1 00:20:41.973 --rc geninfo_all_blocks=1 00:20:41.973 --rc geninfo_unexecuted_blocks=1 00:20:41.973 00:20:41.973 ' 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:41.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.973 --rc genhtml_branch_coverage=1 00:20:41.973 --rc genhtml_function_coverage=1 00:20:41.973 --rc genhtml_legend=1 00:20:41.973 --rc geninfo_all_blocks=1 00:20:41.973 --rc geninfo_unexecuted_blocks=1 00:20:41.973 00:20:41.973 ' 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:41.973 15:57:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.973 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:41.974 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:41.974 15:57:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:48.531 15:57:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:48.531 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:48.531 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:48.531 15:57:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:48.532 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:48.532 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:48.532 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.532 15:57:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # rdma_device_init 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:48.532 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:48.532 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:48.532 altname enp217s0f0np0 00:20:48.532 altname ens818f0np0 00:20:48.532 inet 192.168.100.8/24 scope global mlx_0_0 00:20:48.532 valid_lft forever preferred_lft forever 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:48.532 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:48.532 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:48.532 altname enp217s0f1np1 00:20:48.532 altname ens818f1np1 00:20:48.532 inet 192.168.100.9/24 scope global mlx_0_1 00:20:48.532 valid_lft forever preferred_lft forever 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 
00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:48.532 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:48.533 15:57:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:20:48.533 192.168.100.9' 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:20:48.533 192.168.100.9' 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # head -n 1 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:20:48.533 192.168.100.9' 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # tail -n +2 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # head -n 1 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3349789 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3349789 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3349789 ']' 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
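The head/tail pipeline just traced turns the newline-separated RDMA_IP_LIST into the first and second target addresses before the transport options are set. A small sketch of that split, using the addresses and options seen in this run:

# Sketch: split a newline-separated RDMA IP list into the two target
# addresses, mirroring the head -n 1 / tail -n +2 pipeline in the trace.
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'    # values observed on this test bed
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
echo "target IPs: $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"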
00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.533 15:57:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3350064 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6eb48c42b45cb0821ee9bef943903aa39d722f94c41bc465 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XyA 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6eb48c42b45cb0821ee9bef943903aa39d722f94c41bc465 0 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6eb48c42b45cb0821ee9bef943903aa39d722f94c41bc465 0 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6eb48c42b45cb0821ee9bef943903aa39d722f94c41bc465 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@733 -- # python - 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XyA 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XyA 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.XyA 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2dc1f10b178662eabd6e5ff69272be712daef7c04b07d4e3a1ec6d973bc299d6 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.tWj 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2dc1f10b178662eabd6e5ff69272be712daef7c04b07d4e3a1ec6d973bc299d6 3 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2dc1f10b178662eabd6e5ff69272be712daef7c04b07d4e3a1ec6d973bc299d6 3 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2dc1f10b178662eabd6e5ff69272be712daef7c04b07d4e3a1ec6d973bc299d6 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.tWj 00:20:49.098 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.tWj 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.tWj 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:49.358 15:57:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=40234d97909c7ddfab5085ea64c5b8dc 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5ZD 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 40234d97909c7ddfab5085ea64c5b8dc 1 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 40234d97909c7ddfab5085ea64c5b8dc 1 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:49.358 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=40234d97909c7ddfab5085ea64c5b8dc 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5ZD 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5ZD 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.5ZD 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6df68d69ce902849eeda42ff0aace19d2a4f8340a8d28d9d 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Fqr 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6df68d69ce902849eeda42ff0aace19d2a4f8340a8d28d9d 2 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6df68d69ce902849eeda42ff0aace19d2a4f8340a8d28d9d 2 00:20:49.359 15:57:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6df68d69ce902849eeda42ff0aace19d2a4f8340a8d28d9d 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Fqr 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Fqr 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Fqr 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c3c9f6f28b49bc3fe70a4e2e11f8d0a23128e12a89072944 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OI0 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c3c9f6f28b49bc3fe70a4e2e11f8d0a23128e12a89072944 2 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c3c9f6f28b49bc3fe70a4e2e11f8d0a23128e12a89072944 2 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c3c9f6f28b49bc3fe70a4e2e11f8d0a23128e12a89072944 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OI0 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OI0 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.OI0 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
gen_dhchap_key sha256 32 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6ca47ca7f0c707ee8b7306b381883dc1 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6kw 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6ca47ca7f0c707ee8b7306b381883dc1 1 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6ca47ca7f0c707ee8b7306b381883dc1 1 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6ca47ca7f0c707ee8b7306b381883dc1 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6kw 00:20:49.359 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6kw 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.6kw 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4a815593548ac145d793d9d65f5b722cf2c0fd650aeb7700172ac3dda22da1fb 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:49.618 15:57:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8d2 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4a815593548ac145d793d9d65f5b722cf2c0fd650aeb7700172ac3dda22da1fb 3 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4a815593548ac145d793d9d65f5b722cf2c0fd650aeb7700172ac3dda22da1fb 3 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4a815593548ac145d793d9d65f5b722cf2c0fd650aeb7700172ac3dda22da1fb 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8d2 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8d2 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.8d2 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3349789 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3349789 ']' 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.618 15:57:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.618 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:49.618 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:49.618 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3350064 /var/tmp/host.sock 00:20:49.618 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3350064 ']' 00:20:49.618 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:49.618 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.618 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
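Each gen_dhchap_key invocation above follows the same pattern: draw len/2 random bytes as a len-character hex string with xxd, create a mode-0600 temp file named after the digest, and wrap the material into a DHHC-1 key via format_dhchap_key. The sketch below reproduces only the parts that are visible in the trace; the final DHHC-1 encoding performed by format_key's embedded python one-liner is not shown in this log and is left as a placeholder:

# Sketch, not the suite's own helper: generate DHCHAP key material as the
# trace does (len hex chars = len/2 random bytes) and store it with 0600 perms.
gen_key_material() {
    local digest=$1 len=$2
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # e.g. 48 hex chars for len=48
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    echo "$key" > "$file"     # placeholder: the real helper writes the DHHC-1 form here
    chmod 0600 "$file"
    echo "$file"
}

keyfile=$(gen_key_material null 48)    # e.g. /tmp/spdk.key-null.XXX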
00:20:49.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:20:49.618 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.618 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XyA 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.XyA 00:20:50.184 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.XyA 00:20:50.441 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.tWj ]] 00:20:50.441 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tWj 00:20:50.441 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.441 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.441 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.441 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tWj 00:20:50.441 15:57:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tWj 00:20:50.748 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:50.748 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5ZD 00:20:50.748 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.748 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.748 15:57:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.748 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.5ZD 00:20:50.748 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.5ZD 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Fqr ]] 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fqr 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fqr 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fqr 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OI0 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.OI0 00:20:51.047 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.OI0 00:20:51.305 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.6kw ]] 00:20:51.305 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6kw 00:20:51.305 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.305 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.305 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.305 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6kw 00:20:51.305 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6kw 00:20:51.563 15:57:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:51.563 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8d2 00:20:51.563 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.563 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.563 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.563 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8d2 00:20:51.563 15:57:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.8d2 00:20:51.563 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:51.563 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:51.563 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.563 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.563 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:51.563 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.820 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.078 00:20:52.078 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.078 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.078 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.336 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.336 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.336 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.336 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.336 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.336 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.336 { 00:20:52.336 "cntlid": 1, 00:20:52.336 "qid": 0, 00:20:52.336 "state": "enabled", 00:20:52.336 "thread": "nvmf_tgt_poll_group_000", 00:20:52.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:52.336 "listen_address": { 00:20:52.336 "trtype": "RDMA", 00:20:52.336 "adrfam": "IPv4", 00:20:52.336 "traddr": "192.168.100.8", 00:20:52.336 "trsvcid": "4420" 00:20:52.336 }, 00:20:52.336 "peer_address": { 00:20:52.336 "trtype": "RDMA", 00:20:52.336 "adrfam": "IPv4", 00:20:52.336 "traddr": "192.168.100.8", 00:20:52.336 "trsvcid": "38375" 00:20:52.336 }, 00:20:52.336 "auth": { 00:20:52.336 "state": "completed", 00:20:52.336 "digest": "sha256", 00:20:52.336 "dhgroup": "null" 00:20:52.336 } 00:20:52.336 } 00:20:52.336 ]' 00:20:52.336 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.336 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.336 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.594 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:52.594 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.594 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.594 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.594 15:57:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:20:52.852 15:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:20:52.852 15:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:20:53.418 15:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.418 15:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:53.418 15:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.418 15:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.418 15:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.418 15:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.418 15:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:53.418 15:57:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:53.676 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:53.676 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.676 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:53.676 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:53.676 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.676 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.676 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.676 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.676 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.676 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.676 15:57:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.676 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.676 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.933 00:20:53.933 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.933 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.933 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.191 { 00:20:54.191 "cntlid": 3, 00:20:54.191 "qid": 0, 00:20:54.191 "state": "enabled", 00:20:54.191 "thread": "nvmf_tgt_poll_group_000", 00:20:54.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:54.191 "listen_address": { 00:20:54.191 "trtype": "RDMA", 00:20:54.191 "adrfam": "IPv4", 00:20:54.191 "traddr": "192.168.100.8", 00:20:54.191 "trsvcid": "4420" 00:20:54.191 }, 00:20:54.191 "peer_address": { 00:20:54.191 "trtype": "RDMA", 00:20:54.191 "adrfam": "IPv4", 00:20:54.191 "traddr": "192.168.100.8", 00:20:54.191 "trsvcid": "57910" 00:20:54.191 }, 00:20:54.191 "auth": { 00:20:54.191 "state": "completed", 00:20:54.191 "digest": "sha256", 00:20:54.191 "dhgroup": "null" 00:20:54.191 } 00:20:54.191 } 00:20:54.191 ]' 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.191 15:57:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.191 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.449 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:20:54.449 15:57:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:20:55.016 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.275 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:55.275 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.275 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.275 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.275 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.275 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:55.275 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:55.533 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:55.533 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.533 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:55.533 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.533 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.533 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.533 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.533 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.533 15:57:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.534 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.534 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.534 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.534 15:57:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.792 00:20:55.792 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.792 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.792 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.792 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.792 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.792 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.792 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.792 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.792 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.792 { 00:20:55.792 "cntlid": 5, 00:20:55.792 "qid": 0, 00:20:55.792 "state": "enabled", 00:20:55.792 "thread": "nvmf_tgt_poll_group_000", 00:20:55.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:55.792 "listen_address": { 00:20:55.792 "trtype": "RDMA", 00:20:55.792 "adrfam": "IPv4", 00:20:55.792 "traddr": "192.168.100.8", 00:20:55.792 "trsvcid": "4420" 00:20:55.792 }, 00:20:55.792 "peer_address": { 00:20:55.792 "trtype": "RDMA", 00:20:55.792 "adrfam": "IPv4", 00:20:55.792 "traddr": "192.168.100.8", 00:20:55.792 "trsvcid": "44562" 00:20:55.792 }, 00:20:55.792 "auth": { 00:20:55.792 "state": "completed", 00:20:55.792 "digest": "sha256", 00:20:55.792 "dhgroup": "null" 00:20:55.792 } 00:20:55.792 } 00:20:55.792 ]' 00:20:55.792 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.051 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:56.051 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.051 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.051 15:57:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.051 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.051 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.051 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.310 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:20:56.310 15:57:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:20:56.876 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.876 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:56.876 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.876 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.876 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.876 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.876 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:56.876 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.135 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.392 00:20:57.392 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.392 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.392 15:57:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.650 { 00:20:57.650 "cntlid": 7, 00:20:57.650 "qid": 0, 00:20:57.650 "state": "enabled", 00:20:57.650 "thread": "nvmf_tgt_poll_group_000", 00:20:57.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:57.650 "listen_address": { 00:20:57.650 "trtype": "RDMA", 00:20:57.650 "adrfam": "IPv4", 00:20:57.650 "traddr": "192.168.100.8", 00:20:57.650 "trsvcid": "4420" 00:20:57.650 }, 00:20:57.650 "peer_address": { 00:20:57.650 "trtype": "RDMA", 00:20:57.650 "adrfam": "IPv4", 00:20:57.650 "traddr": "192.168.100.8", 00:20:57.650 "trsvcid": "46913" 00:20:57.650 }, 00:20:57.650 "auth": { 00:20:57.650 "state": "completed", 00:20:57.650 "digest": "sha256", 00:20:57.650 "dhgroup": "null" 00:20:57.650 } 00:20:57.650 } 00:20:57.650 ]' 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
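The sequence above is the core verification step of connect_authenticate: after the host attaches nvme0 with a given key, the test pulls the subsystem's queue pairs from the target and checks that the negotiated digest, DH group, and authentication state match what was configured. A minimal stand-alone sketch of that check in shell, assuming the same rpc.py path and subsystem NQN as in this run (the expected digest/dhgroup values change with each loop iteration):

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Query the target (default RPC socket /var/tmp/spdk.sock) for the subsystem's qpairs.
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")

# Extract the negotiated auth parameters of the first qpair, as the trace does with jq.
digest=$(jq -r '.[0].auth.digest' <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
state=$(jq -r '.[0].auth.state' <<< "$qpairs")

# For this iteration the expectation is sha256 with the "null" (no-DH) group.
[[ $digest == sha256 && $dhgroup == null && $state == completed ]] ||
    { echo "unexpected auth parameters: $digest/$dhgroup/$state" >&2; exit 1; }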
00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.650 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.908 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:20:57.909 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:20:58.474 15:57:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.731 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:58.731 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.731 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.731 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.731 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.731 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.731 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:58.731 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:58.988 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:58.988 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.988 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:58.988 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:58.988 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:58.988 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.988 15:57:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.988 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.988 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.988 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.988 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.988 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.988 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.245 00:20:59.245 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.245 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.245 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.245 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.245 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.245 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.245 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.245 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.245 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.245 { 00:20:59.245 "cntlid": 9, 00:20:59.245 "qid": 0, 00:20:59.245 "state": "enabled", 00:20:59.245 "thread": "nvmf_tgt_poll_group_000", 00:20:59.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:20:59.245 "listen_address": { 00:20:59.245 "trtype": "RDMA", 00:20:59.245 "adrfam": "IPv4", 00:20:59.245 "traddr": "192.168.100.8", 00:20:59.245 "trsvcid": "4420" 00:20:59.245 }, 00:20:59.245 "peer_address": { 00:20:59.245 "trtype": "RDMA", 00:20:59.245 "adrfam": "IPv4", 00:20:59.245 "traddr": "192.168.100.8", 00:20:59.245 "trsvcid": "39889" 00:20:59.245 }, 00:20:59.245 "auth": { 00:20:59.245 "state": "completed", 00:20:59.245 "digest": "sha256", 00:20:59.245 "dhgroup": "ffdhe2048" 00:20:59.245 } 00:20:59.245 } 00:20:59.245 ]' 00:20:59.245 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
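Each pass of the digest/dhgroup loop repeats the same host-side recipe visible in the trace: restrict the initiator to a single digest and DH group with bdev_nvme_set_options, attach the controller over RDMA with the keyring entries to authenticate with, and detach again before the next iteration. A condensed sketch of one iteration, assuming key0/ckey0 were already registered on the host RPC socket with keyring_file_add_key as above:

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Force the initiator to negotiate sha256 + ffdhe2048 only.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach to the RDMA listener with the host key and the bidirectional controller key.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 \
    -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Tear down before the next keyid, mirroring the detach calls in the trace.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0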
00:20:59.503 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.503 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.503 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:59.503 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.503 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.503 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.503 15:57:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.761 15:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:20:59.761 15:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:00.327 15:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.327 15:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:00.327 15:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.327 15:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.585 15:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.585 15:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.585 15:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:00.585 15:57:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:00.585 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:00.585 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.585 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:00.585 15:57:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:00.585 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:00.585 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.585 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.585 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.585 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.585 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.585 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.585 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.585 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.843 00:21:00.843 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.843 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.843 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.101 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.101 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.101 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.101 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.101 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.101 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.101 { 00:21:01.101 "cntlid": 11, 00:21:01.101 "qid": 0, 00:21:01.101 "state": "enabled", 00:21:01.101 "thread": "nvmf_tgt_poll_group_000", 00:21:01.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:01.101 "listen_address": { 00:21:01.101 "trtype": "RDMA", 00:21:01.101 "adrfam": "IPv4", 00:21:01.101 "traddr": "192.168.100.8", 00:21:01.101 "trsvcid": "4420" 00:21:01.101 }, 00:21:01.101 "peer_address": { 00:21:01.101 "trtype": "RDMA", 00:21:01.101 "adrfam": "IPv4", 00:21:01.101 "traddr": 
"192.168.100.8", 00:21:01.101 "trsvcid": "45629" 00:21:01.101 }, 00:21:01.101 "auth": { 00:21:01.101 "state": "completed", 00:21:01.101 "digest": "sha256", 00:21:01.101 "dhgroup": "ffdhe2048" 00:21:01.101 } 00:21:01.101 } 00:21:01.101 ]' 00:21:01.101 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.101 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:01.101 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.101 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:01.101 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.359 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.359 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.359 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.359 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:01.359 15:57:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 
00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.293 15:57:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.550 00:21:02.551 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.551 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.551 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.808 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.808 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.808 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.808 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.808 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.808 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.808 { 00:21:02.808 "cntlid": 13, 00:21:02.809 "qid": 0, 00:21:02.809 "state": "enabled", 00:21:02.809 "thread": "nvmf_tgt_poll_group_000", 00:21:02.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:02.809 "listen_address": { 00:21:02.809 
"trtype": "RDMA", 00:21:02.809 "adrfam": "IPv4", 00:21:02.809 "traddr": "192.168.100.8", 00:21:02.809 "trsvcid": "4420" 00:21:02.809 }, 00:21:02.809 "peer_address": { 00:21:02.809 "trtype": "RDMA", 00:21:02.809 "adrfam": "IPv4", 00:21:02.809 "traddr": "192.168.100.8", 00:21:02.809 "trsvcid": "41566" 00:21:02.809 }, 00:21:02.809 "auth": { 00:21:02.809 "state": "completed", 00:21:02.809 "digest": "sha256", 00:21:02.809 "dhgroup": "ffdhe2048" 00:21:02.809 } 00:21:02.809 } 00:21:02.809 ]' 00:21:02.809 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.066 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:03.066 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.066 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.066 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.066 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.066 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.066 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.324 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:03.324 15:57:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:03.889 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.889 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:03.889 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.889 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.889 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.889 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.889 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:03.889 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:04.146 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:04.146 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.146 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:04.146 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:04.146 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.146 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.146 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:04.147 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.147 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.147 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.147 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.147 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.147 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.404 00:21:04.404 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.404 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.404 15:57:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.662 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.662 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.662 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.662 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.662 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.662 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.662 { 00:21:04.662 "cntlid": 15, 00:21:04.662 "qid": 0, 00:21:04.662 "state": "enabled", 
00:21:04.662 "thread": "nvmf_tgt_poll_group_000", 00:21:04.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:04.662 "listen_address": { 00:21:04.662 "trtype": "RDMA", 00:21:04.662 "adrfam": "IPv4", 00:21:04.662 "traddr": "192.168.100.8", 00:21:04.662 "trsvcid": "4420" 00:21:04.662 }, 00:21:04.662 "peer_address": { 00:21:04.662 "trtype": "RDMA", 00:21:04.662 "adrfam": "IPv4", 00:21:04.662 "traddr": "192.168.100.8", 00:21:04.662 "trsvcid": "40306" 00:21:04.662 }, 00:21:04.662 "auth": { 00:21:04.662 "state": "completed", 00:21:04.662 "digest": "sha256", 00:21:04.662 "dhgroup": "ffdhe2048" 00:21:04.662 } 00:21:04.662 } 00:21:04.662 ]' 00:21:04.662 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.662 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:04.662 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.662 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:04.663 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.663 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.663 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.663 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.920 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:04.920 15:57:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:05.526 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.783 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:05.783 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.783 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.783 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.783 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.783 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.783 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:05.783 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:05.783 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:05.783 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.040 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:06.040 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:06.040 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.040 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.041 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.041 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.041 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.041 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.041 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.041 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.041 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.298 00:21:06.298 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.298 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.298 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.298 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.298 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.298 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.298 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.298 15:57:51 
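A condensed sketch of the three-step DH-HMAC-CHAP flow each iteration above repeats, using the sockets, NQNs and key names visible in this run; key0/ckey0 are assumed to be keyring names registered earlier in the log, and values vary per iteration:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

# 1. Host side: restrict the initiator to one digest/dhgroup pair.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072

# 2. Target side (default RPC socket, as rpc_cmd uses here): allow the host on
#    the subsystem; supplying a controller key as well makes the auth bidirectional.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Host side: attach; DH-HMAC-CHAP runs while the admin queue is brought up.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0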
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.298 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.298 { 00:21:06.298 "cntlid": 17, 00:21:06.298 "qid": 0, 00:21:06.298 "state": "enabled", 00:21:06.298 "thread": "nvmf_tgt_poll_group_000", 00:21:06.298 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:06.298 "listen_address": { 00:21:06.298 "trtype": "RDMA", 00:21:06.298 "adrfam": "IPv4", 00:21:06.298 "traddr": "192.168.100.8", 00:21:06.298 "trsvcid": "4420" 00:21:06.298 }, 00:21:06.298 "peer_address": { 00:21:06.298 "trtype": "RDMA", 00:21:06.298 "adrfam": "IPv4", 00:21:06.298 "traddr": "192.168.100.8", 00:21:06.298 "trsvcid": "55924" 00:21:06.298 }, 00:21:06.298 "auth": { 00:21:06.298 "state": "completed", 00:21:06.298 "digest": "sha256", 00:21:06.298 "dhgroup": "ffdhe3072" 00:21:06.298 } 00:21:06.298 } 00:21:06.298 ]' 00:21:06.298 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.555 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:06.555 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.555 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:06.555 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.555 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.555 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.555 15:57:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.812 15:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:06.812 15:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:07.375 15:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.375 15:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:07.375 15:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.375 15:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.375 15:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.375 15:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.375 15:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:07.375 15:57:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.630 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.885 00:21:07.886 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.886 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.886 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.142 15:57:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.142 { 00:21:08.142 "cntlid": 19, 00:21:08.142 "qid": 0, 00:21:08.142 "state": "enabled", 00:21:08.142 "thread": "nvmf_tgt_poll_group_000", 00:21:08.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:08.142 "listen_address": { 00:21:08.142 "trtype": "RDMA", 00:21:08.142 "adrfam": "IPv4", 00:21:08.142 "traddr": "192.168.100.8", 00:21:08.142 "trsvcid": "4420" 00:21:08.142 }, 00:21:08.142 "peer_address": { 00:21:08.142 "trtype": "RDMA", 00:21:08.142 "adrfam": "IPv4", 00:21:08.142 "traddr": "192.168.100.8", 00:21:08.142 "trsvcid": "58742" 00:21:08.142 }, 00:21:08.142 "auth": { 00:21:08.142 "state": "completed", 00:21:08.142 "digest": "sha256", 00:21:08.142 "dhgroup": "ffdhe3072" 00:21:08.142 } 00:21:08.142 } 00:21:08.142 ]' 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.142 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.399 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:08.399 15:57:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:09.330 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.330 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.331 15:57:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.588 00:21:09.588 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.588 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.588 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.845 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.845 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.845 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.845 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.845 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.845 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.845 { 00:21:09.845 "cntlid": 21, 00:21:09.845 "qid": 0, 00:21:09.845 "state": "enabled", 00:21:09.845 "thread": "nvmf_tgt_poll_group_000", 00:21:09.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:09.845 "listen_address": { 00:21:09.845 "trtype": "RDMA", 00:21:09.845 "adrfam": "IPv4", 00:21:09.845 "traddr": "192.168.100.8", 00:21:09.845 "trsvcid": "4420" 00:21:09.845 }, 00:21:09.845 "peer_address": { 00:21:09.845 "trtype": "RDMA", 00:21:09.845 "adrfam": "IPv4", 00:21:09.845 "traddr": "192.168.100.8", 00:21:09.845 "trsvcid": "56360" 00:21:09.845 }, 00:21:09.845 "auth": { 00:21:09.845 "state": "completed", 00:21:09.845 "digest": "sha256", 00:21:09.845 "dhgroup": "ffdhe3072" 00:21:09.845 } 00:21:09.845 } 00:21:09.845 ]' 00:21:09.845 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.845 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:09.845 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.102 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.102 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.102 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.102 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.102 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.102 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:10.102 15:57:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.032 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.289 00:21:11.547 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.547 15:57:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.547 15:57:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.547 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.547 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.547 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.547 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.547 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.547 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.547 { 00:21:11.547 "cntlid": 23, 00:21:11.547 "qid": 0, 00:21:11.547 "state": "enabled", 00:21:11.547 "thread": "nvmf_tgt_poll_group_000", 00:21:11.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:11.547 "listen_address": { 00:21:11.547 "trtype": "RDMA", 00:21:11.547 "adrfam": "IPv4", 00:21:11.547 "traddr": "192.168.100.8", 00:21:11.547 "trsvcid": "4420" 00:21:11.547 }, 00:21:11.547 "peer_address": { 00:21:11.547 "trtype": "RDMA", 00:21:11.547 "adrfam": "IPv4", 00:21:11.547 "traddr": "192.168.100.8", 00:21:11.547 "trsvcid": "60502" 00:21:11.547 }, 00:21:11.547 "auth": { 00:21:11.547 "state": "completed", 00:21:11.547 "digest": "sha256", 00:21:11.547 "dhgroup": "ffdhe3072" 00:21:11.547 } 00:21:11.547 } 00:21:11.547 ]' 00:21:11.547 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.806 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:11.806 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.806 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:11.806 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.806 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.806 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.806 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.064 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:12.064 15:57:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:12.629 15:57:57 
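A minimal sketch of the post-attach verification performed above, under the same assumptions (host RPC socket at /var/tmp/host.sock, target on its default socket); digest and dhgroup change with each iteration:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
digest=sha256 dhgroup=ffdhe3072

# Host side: the controller must have come up under the expected name.
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Target side: the admin qpair must report the negotiated auth parameters.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed  ]]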
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.629 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:12.629 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.629 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.629 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.629 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.629 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.629 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:12.629 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.887 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.145 00:21:13.145 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.145 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.145 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.402 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.402 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.402 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.402 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.402 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.402 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.402 { 00:21:13.402 "cntlid": 25, 00:21:13.402 "qid": 0, 00:21:13.402 "state": "enabled", 00:21:13.402 "thread": "nvmf_tgt_poll_group_000", 00:21:13.402 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:13.402 "listen_address": { 00:21:13.402 "trtype": "RDMA", 00:21:13.402 "adrfam": "IPv4", 00:21:13.402 "traddr": "192.168.100.8", 00:21:13.402 "trsvcid": "4420" 00:21:13.402 }, 00:21:13.402 "peer_address": { 00:21:13.402 "trtype": "RDMA", 00:21:13.402 "adrfam": "IPv4", 00:21:13.402 "traddr": "192.168.100.8", 00:21:13.402 "trsvcid": "51234" 00:21:13.402 }, 00:21:13.402 "auth": { 00:21:13.402 "state": "completed", 00:21:13.402 "digest": "sha256", 00:21:13.402 "dhgroup": "ffdhe4096" 00:21:13.402 } 00:21:13.402 } 00:21:13.402 ]' 00:21:13.402 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.402 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.402 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.402 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:13.402 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.403 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.403 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.403 15:57:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.661 15:57:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:13.661 15:57:59 
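The nvme_connect/disconnect pairs above exercise the same secrets through the Linux kernel initiator; roughly the following nvme-cli invocation, with the DHHC-1 strings shown as placeholders rather than the secrets from this run:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 \
    -i 1 -l 0 -q "$hostnqn" --hostid 8013ee90-59d8-e711-906e-00163566263e \
    --dhchap-secret 'DHHC-1:00:<host key>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller key>:'

nvme disconnect -n nqn.2024-03.io.spdk:cnode0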
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:14.231 15:57:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.491 15:57:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:14.491 15:57:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.491 15:57:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.491 15:57:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.491 15:57:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.491 15:57:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:14.491 15:57:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.750 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:15.009 00:21:15.009 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.009 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.009 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.267 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.267 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.267 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.267 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.267 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.267 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.267 { 00:21:15.267 "cntlid": 27, 00:21:15.267 "qid": 0, 00:21:15.267 "state": "enabled", 00:21:15.267 "thread": "nvmf_tgt_poll_group_000", 00:21:15.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:15.267 "listen_address": { 00:21:15.267 "trtype": "RDMA", 00:21:15.267 "adrfam": "IPv4", 00:21:15.267 "traddr": "192.168.100.8", 00:21:15.267 "trsvcid": "4420" 00:21:15.267 }, 00:21:15.267 "peer_address": { 00:21:15.267 "trtype": "RDMA", 00:21:15.267 "adrfam": "IPv4", 00:21:15.267 "traddr": "192.168.100.8", 00:21:15.267 "trsvcid": "39014" 00:21:15.267 }, 00:21:15.267 "auth": { 00:21:15.267 "state": "completed", 00:21:15.267 "digest": "sha256", 00:21:15.267 "dhgroup": "ffdhe4096" 00:21:15.267 } 00:21:15.267 } 00:21:15.267 ]' 00:21:15.267 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.268 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:15.268 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.268 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:15.268 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.268 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.268 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.268 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.525 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:15.525 15:58:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:16.091 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.091 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:16.091 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.091 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.091 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.091 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.091 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:16.091 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:16.349 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:16.349 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.349 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:16.349 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.349 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.349 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.349 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.349 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.349 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.349 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.349 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.349 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.350 15:58:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.608 00:21:16.608 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.608 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.608 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.866 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.866 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.866 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.867 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.867 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.867 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.867 { 00:21:16.867 "cntlid": 29, 00:21:16.867 "qid": 0, 00:21:16.867 "state": "enabled", 00:21:16.867 "thread": "nvmf_tgt_poll_group_000", 00:21:16.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:16.867 "listen_address": { 00:21:16.867 "trtype": "RDMA", 00:21:16.867 "adrfam": "IPv4", 00:21:16.867 "traddr": "192.168.100.8", 00:21:16.867 "trsvcid": "4420" 00:21:16.867 }, 00:21:16.867 "peer_address": { 00:21:16.867 "trtype": "RDMA", 00:21:16.867 "adrfam": "IPv4", 00:21:16.867 "traddr": "192.168.100.8", 00:21:16.867 "trsvcid": "44229" 00:21:16.867 }, 00:21:16.867 "auth": { 00:21:16.867 "state": "completed", 00:21:16.867 "digest": "sha256", 00:21:16.867 "dhgroup": "ffdhe4096" 00:21:16.867 } 00:21:16.867 } 00:21:16.867 ]' 00:21:16.867 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.867 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:16.867 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.867 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.867 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.125 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.125 15:58:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.125 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.125 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:17.126 15:58:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:17.692 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.951 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:17.951 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.951 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.951 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.951 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.951 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:17.951 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:18.209 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:18.209 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.209 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:18.209 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:18.209 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:18.209 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.209 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:18.209 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.209 15:58:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.209 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.209 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:18.209 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.209 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.468 00:21:18.468 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.468 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.468 15:58:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.794 { 00:21:18.794 "cntlid": 31, 00:21:18.794 "qid": 0, 00:21:18.794 "state": "enabled", 00:21:18.794 "thread": "nvmf_tgt_poll_group_000", 00:21:18.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:18.794 "listen_address": { 00:21:18.794 "trtype": "RDMA", 00:21:18.794 "adrfam": "IPv4", 00:21:18.794 "traddr": "192.168.100.8", 00:21:18.794 "trsvcid": "4420" 00:21:18.794 }, 00:21:18.794 "peer_address": { 00:21:18.794 "trtype": "RDMA", 00:21:18.794 "adrfam": "IPv4", 00:21:18.794 "traddr": "192.168.100.8", 00:21:18.794 "trsvcid": "45164" 00:21:18.794 }, 00:21:18.794 "auth": { 00:21:18.794 "state": "completed", 00:21:18.794 "digest": "sha256", 00:21:18.794 "dhgroup": "ffdhe4096" 00:21:18.794 } 00:21:18.794 } 00:21:18.794 ]' 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
jq -r '.[0].auth.state' 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.794 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.089 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:19.089 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:19.662 15:58:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.662 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:19.662 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.662 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.662 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.662 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.663 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.663 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:19.663 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key 
key0 --dhchap-ctrlr-key ckey0 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:19.921 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.178 00:21:20.178 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.178 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.178 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.435 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.435 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.435 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.435 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.435 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.435 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.435 { 00:21:20.435 "cntlid": 33, 00:21:20.435 "qid": 0, 00:21:20.435 "state": "enabled", 00:21:20.435 "thread": "nvmf_tgt_poll_group_000", 00:21:20.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:20.435 "listen_address": { 00:21:20.435 "trtype": "RDMA", 00:21:20.436 "adrfam": "IPv4", 00:21:20.436 "traddr": "192.168.100.8", 00:21:20.436 "trsvcid": "4420" 00:21:20.436 }, 00:21:20.436 "peer_address": { 00:21:20.436 "trtype": "RDMA", 00:21:20.436 "adrfam": "IPv4", 00:21:20.436 "traddr": "192.168.100.8", 00:21:20.436 "trsvcid": "58408" 00:21:20.436 }, 00:21:20.436 "auth": { 00:21:20.436 "state": "completed", 00:21:20.436 "digest": "sha256", 00:21:20.436 "dhgroup": "ffdhe6144" 00:21:20.436 } 00:21:20.436 } 00:21:20.436 ]' 00:21:20.436 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.436 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:20.436 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq 
-r '.[0].auth.dhgroup' 00:21:20.436 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:20.436 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.693 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.694 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.694 15:58:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.694 15:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:20.694 15:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:21.626 15:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.626 15:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:21.626 15:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.626 15:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.626 15:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.626 15:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.626 15:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:21.626 15:58:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:21.626 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:21.626 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.626 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:21.626 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:21.626 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.626 15:58:07 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.626 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.626 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.626 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.626 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.626 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.626 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.626 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.192 00:21:22.192 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.192 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.192 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.192 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.192 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.192 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.192 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.192 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.192 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.192 { 00:21:22.192 "cntlid": 35, 00:21:22.192 "qid": 0, 00:21:22.192 "state": "enabled", 00:21:22.192 "thread": "nvmf_tgt_poll_group_000", 00:21:22.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:22.192 "listen_address": { 00:21:22.192 "trtype": "RDMA", 00:21:22.192 "adrfam": "IPv4", 00:21:22.192 "traddr": "192.168.100.8", 00:21:22.192 "trsvcid": "4420" 00:21:22.192 }, 00:21:22.192 "peer_address": { 00:21:22.192 "trtype": "RDMA", 00:21:22.192 "adrfam": "IPv4", 00:21:22.192 "traddr": "192.168.100.8", 00:21:22.192 "trsvcid": "48054" 00:21:22.192 }, 00:21:22.192 "auth": { 00:21:22.192 "state": "completed", 00:21:22.192 "digest": "sha256", 00:21:22.192 "dhgroup": "ffdhe6144" 00:21:22.192 } 00:21:22.192 } 
00:21:22.192 ]' 00:21:22.192 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.192 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:22.449 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.449 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:22.449 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.449 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.449 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.449 15:58:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.707 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:22.707 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:23.272 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.272 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:23.272 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.272 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.272 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.272 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.272 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:23.272 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.530 15:58:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.788 00:21:23.788 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.788 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.788 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.045 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.045 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.045 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.045 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.045 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.045 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.045 { 00:21:24.045 "cntlid": 37, 00:21:24.045 "qid": 0, 00:21:24.045 "state": "enabled", 00:21:24.045 "thread": "nvmf_tgt_poll_group_000", 00:21:24.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:24.045 "listen_address": { 00:21:24.045 "trtype": "RDMA", 00:21:24.045 "adrfam": "IPv4", 00:21:24.045 "traddr": "192.168.100.8", 00:21:24.045 "trsvcid": "4420" 00:21:24.045 }, 00:21:24.045 "peer_address": { 00:21:24.045 "trtype": "RDMA", 00:21:24.045 "adrfam": 
"IPv4", 00:21:24.045 "traddr": "192.168.100.8", 00:21:24.045 "trsvcid": "52883" 00:21:24.045 }, 00:21:24.045 "auth": { 00:21:24.045 "state": "completed", 00:21:24.045 "digest": "sha256", 00:21:24.045 "dhgroup": "ffdhe6144" 00:21:24.045 } 00:21:24.045 } 00:21:24.045 ]' 00:21:24.045 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.045 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:24.045 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.303 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:24.303 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.303 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.303 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.303 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.560 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:24.560 15:58:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:25.127 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.127 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:25.127 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.127 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.127 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.127 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.127 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:25.127 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
connect_authenticate sha256 ffdhe6144 3 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.385 15:58:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:25.643 00:21:25.643 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.643 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.643 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.901 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.901 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.901 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.901 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.901 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.901 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.901 { 00:21:25.901 "cntlid": 39, 00:21:25.901 "qid": 0, 00:21:25.901 "state": "enabled", 00:21:25.901 "thread": "nvmf_tgt_poll_group_000", 00:21:25.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:25.901 "listen_address": { 00:21:25.901 "trtype": "RDMA", 00:21:25.901 "adrfam": "IPv4", 00:21:25.901 
"traddr": "192.168.100.8", 00:21:25.901 "trsvcid": "4420" 00:21:25.901 }, 00:21:25.901 "peer_address": { 00:21:25.901 "trtype": "RDMA", 00:21:25.901 "adrfam": "IPv4", 00:21:25.901 "traddr": "192.168.100.8", 00:21:25.901 "trsvcid": "46180" 00:21:25.901 }, 00:21:25.901 "auth": { 00:21:25.901 "state": "completed", 00:21:25.901 "digest": "sha256", 00:21:25.901 "dhgroup": "ffdhe6144" 00:21:25.901 } 00:21:25.901 } 00:21:25.901 ]' 00:21:25.901 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.901 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:25.901 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.901 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.901 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.159 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.159 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.159 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.417 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:26.417 15:58:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:26.981 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.981 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:26.981 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.981 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.981 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.981 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.981 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.981 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:26.981 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.240 15:58:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:27.805 00:21:27.805 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.805 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.805 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.805 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.805 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.805 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.805 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.805 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.805 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.805 { 00:21:27.805 "cntlid": 41, 00:21:27.805 "qid": 0, 00:21:27.805 "state": "enabled", 
00:21:27.805 "thread": "nvmf_tgt_poll_group_000", 00:21:27.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:27.805 "listen_address": { 00:21:27.805 "trtype": "RDMA", 00:21:27.805 "adrfam": "IPv4", 00:21:27.805 "traddr": "192.168.100.8", 00:21:27.805 "trsvcid": "4420" 00:21:27.805 }, 00:21:27.805 "peer_address": { 00:21:27.805 "trtype": "RDMA", 00:21:27.805 "adrfam": "IPv4", 00:21:27.805 "traddr": "192.168.100.8", 00:21:27.805 "trsvcid": "39561" 00:21:27.805 }, 00:21:27.805 "auth": { 00:21:27.805 "state": "completed", 00:21:27.805 "digest": "sha256", 00:21:27.805 "dhgroup": "ffdhe8192" 00:21:27.805 } 00:21:27.805 } 00:21:27.805 ]' 00:21:27.805 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.063 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:28.063 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.063 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:28.063 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.063 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.063 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.063 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.320 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:28.320 15:58:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:28.885 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.885 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:28.885 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.885 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.885 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.885 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.885 15:58:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:28.885 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.143 15:58:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.709 00:21:29.709 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.709 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.709 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.967 { 00:21:29.967 "cntlid": 43, 00:21:29.967 "qid": 0, 00:21:29.967 "state": "enabled", 00:21:29.967 "thread": "nvmf_tgt_poll_group_000", 00:21:29.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:29.967 "listen_address": { 00:21:29.967 "trtype": "RDMA", 00:21:29.967 "adrfam": "IPv4", 00:21:29.967 "traddr": "192.168.100.8", 00:21:29.967 "trsvcid": "4420" 00:21:29.967 }, 00:21:29.967 "peer_address": { 00:21:29.967 "trtype": "RDMA", 00:21:29.967 "adrfam": "IPv4", 00:21:29.967 "traddr": "192.168.100.8", 00:21:29.967 "trsvcid": "33163" 00:21:29.967 }, 00:21:29.967 "auth": { 00:21:29.967 "state": "completed", 00:21:29.967 "digest": "sha256", 00:21:29.967 "dhgroup": "ffdhe8192" 00:21:29.967 } 00:21:29.967 } 00:21:29.967 ]' 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.967 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.225 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:30.225 15:58:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:30.791 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.791 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:30.791 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.791 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:30.791 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.791 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.791 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:30.791 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:31.048 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:31.049 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.049 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:31.049 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.049 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:31.049 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.049 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.049 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.049 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.049 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.049 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.049 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.049 15:58:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.612 00:21:31.612 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.612 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.612 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.870 { 00:21:31.870 "cntlid": 45, 00:21:31.870 "qid": 0, 00:21:31.870 "state": "enabled", 00:21:31.870 "thread": "nvmf_tgt_poll_group_000", 00:21:31.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:31.870 "listen_address": { 00:21:31.870 "trtype": "RDMA", 00:21:31.870 "adrfam": "IPv4", 00:21:31.870 "traddr": "192.168.100.8", 00:21:31.870 "trsvcid": "4420" 00:21:31.870 }, 00:21:31.870 "peer_address": { 00:21:31.870 "trtype": "RDMA", 00:21:31.870 "adrfam": "IPv4", 00:21:31.870 "traddr": "192.168.100.8", 00:21:31.870 "trsvcid": "49315" 00:21:31.870 }, 00:21:31.870 "auth": { 00:21:31.870 "state": "completed", 00:21:31.870 "digest": "sha256", 00:21:31.870 "dhgroup": "ffdhe8192" 00:21:31.870 } 00:21:31.870 } 00:21:31.870 ]' 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.870 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.127 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:32.127 15:58:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:32.691 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:32.948 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:33.513 00:21:33.513 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.513 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.513 15:58:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.780 
15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.781 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.781 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.781 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.781 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.781 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.781 { 00:21:33.781 "cntlid": 47, 00:21:33.781 "qid": 0, 00:21:33.781 "state": "enabled", 00:21:33.781 "thread": "nvmf_tgt_poll_group_000", 00:21:33.781 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:33.781 "listen_address": { 00:21:33.781 "trtype": "RDMA", 00:21:33.781 "adrfam": "IPv4", 00:21:33.781 "traddr": "192.168.100.8", 00:21:33.781 "trsvcid": "4420" 00:21:33.781 }, 00:21:33.781 "peer_address": { 00:21:33.781 "trtype": "RDMA", 00:21:33.781 "adrfam": "IPv4", 00:21:33.781 "traddr": "192.168.100.8", 00:21:33.781 "trsvcid": "33062" 00:21:33.781 }, 00:21:33.781 "auth": { 00:21:33.781 "state": "completed", 00:21:33.781 "digest": "sha256", 00:21:33.781 "dhgroup": "ffdhe8192" 00:21:33.781 } 00:21:33.781 } 00:21:33.781 ]' 00:21:33.781 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.781 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:33.781 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.781 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:33.781 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.782 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.782 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.782 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.045 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:34.045 15:58:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:34.611 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.870 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:34.870 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.870 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.870 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.870 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:34.870 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.870 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.870 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:34.870 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.129 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:35.129 00:21:35.387 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:21:35.387 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.387 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.387 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.387 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.387 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.387 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.387 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.387 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.387 { 00:21:35.387 "cntlid": 49, 00:21:35.387 "qid": 0, 00:21:35.387 "state": "enabled", 00:21:35.387 "thread": "nvmf_tgt_poll_group_000", 00:21:35.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:35.387 "listen_address": { 00:21:35.387 "trtype": "RDMA", 00:21:35.387 "adrfam": "IPv4", 00:21:35.387 "traddr": "192.168.100.8", 00:21:35.387 "trsvcid": "4420" 00:21:35.387 }, 00:21:35.387 "peer_address": { 00:21:35.387 "trtype": "RDMA", 00:21:35.387 "adrfam": "IPv4", 00:21:35.387 "traddr": "192.168.100.8", 00:21:35.387 "trsvcid": "56683" 00:21:35.387 }, 00:21:35.387 "auth": { 00:21:35.387 "state": "completed", 00:21:35.387 "digest": "sha384", 00:21:35.387 "dhgroup": "null" 00:21:35.387 } 00:21:35.387 } 00:21:35.387 ]' 00:21:35.387 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.645 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:35.645 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.645 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:35.645 15:58:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.645 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.646 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.646 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.904 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:35.904 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:36.470 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.470 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:36.470 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.470 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.470 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.470 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.470 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:36.470 15:58:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.728 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.987 00:21:36.987 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.987 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.987 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.245 { 00:21:37.245 "cntlid": 51, 00:21:37.245 "qid": 0, 00:21:37.245 "state": "enabled", 00:21:37.245 "thread": "nvmf_tgt_poll_group_000", 00:21:37.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:37.245 "listen_address": { 00:21:37.245 "trtype": "RDMA", 00:21:37.245 "adrfam": "IPv4", 00:21:37.245 "traddr": "192.168.100.8", 00:21:37.245 "trsvcid": "4420" 00:21:37.245 }, 00:21:37.245 "peer_address": { 00:21:37.245 "trtype": "RDMA", 00:21:37.245 "adrfam": "IPv4", 00:21:37.245 "traddr": "192.168.100.8", 00:21:37.245 "trsvcid": "46827" 00:21:37.245 }, 00:21:37.245 "auth": { 00:21:37.245 "state": "completed", 00:21:37.245 "digest": "sha384", 00:21:37.245 "dhgroup": "null" 00:21:37.245 } 00:21:37.245 } 00:21:37.245 ]' 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.245 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.504 15:58:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:37.504 15:58:22 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:38.070 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.328 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:38.328 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.328 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.328 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.328 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.328 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:38.328 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:21:38.587 15:58:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.845 00:21:38.845 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.845 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.845 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.845 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.845 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.845 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.845 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.845 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.845 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.845 { 00:21:38.845 "cntlid": 53, 00:21:38.845 "qid": 0, 00:21:38.845 "state": "enabled", 00:21:38.845 "thread": "nvmf_tgt_poll_group_000", 00:21:38.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:38.845 "listen_address": { 00:21:38.845 "trtype": "RDMA", 00:21:38.845 "adrfam": "IPv4", 00:21:38.845 "traddr": "192.168.100.8", 00:21:38.845 "trsvcid": "4420" 00:21:38.845 }, 00:21:38.845 "peer_address": { 00:21:38.845 "trtype": "RDMA", 00:21:38.845 "adrfam": "IPv4", 00:21:38.845 "traddr": "192.168.100.8", 00:21:38.845 "trsvcid": "39558" 00:21:38.845 }, 00:21:38.845 "auth": { 00:21:38.845 "state": "completed", 00:21:38.845 "digest": "sha384", 00:21:38.845 "dhgroup": "null" 00:21:38.845 } 00:21:38.845 } 00:21:38.845 ]' 00:21:38.845 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.104 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:39.104 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.104 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:39.104 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.104 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.104 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.104 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.363 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:39.363 15:58:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:39.930 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.930 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:39.930 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.930 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.930 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.930 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.930 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:39.930 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 
-a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.189 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.447 00:21:40.447 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.447 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.447 15:58:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.759 { 00:21:40.759 "cntlid": 55, 00:21:40.759 "qid": 0, 00:21:40.759 "state": "enabled", 00:21:40.759 "thread": "nvmf_tgt_poll_group_000", 00:21:40.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:40.759 "listen_address": { 00:21:40.759 "trtype": "RDMA", 00:21:40.759 "adrfam": "IPv4", 00:21:40.759 "traddr": "192.168.100.8", 00:21:40.759 "trsvcid": "4420" 00:21:40.759 }, 00:21:40.759 "peer_address": { 00:21:40.759 "trtype": "RDMA", 00:21:40.759 "adrfam": "IPv4", 00:21:40.759 "traddr": "192.168.100.8", 00:21:40.759 "trsvcid": "47638" 00:21:40.759 }, 00:21:40.759 "auth": { 00:21:40.759 "state": "completed", 00:21:40.759 "digest": "sha384", 00:21:40.759 "dhgroup": "null" 00:21:40.759 } 00:21:40.759 } 00:21:40.759 ]' 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.759 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
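Each digest/dhgroup/key combination above repeats the same register-attach-verify sequence. A minimal sketch of one iteration, reconstructed from the target/auth.sh commands captured in this log (sha384/null with key0 taken as the example; the host-side RPCs use the /var/tmp/host.sock socket shown above, and the target-side calls are assumed here to go to the default SPDK RPC socket):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Target side: register the host NQN with the key pair it is expected to present.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: restrict the initiator to the digest/dhgroup pair under test, then attach with key0/ckey0.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
  -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Target side: the qpair must report a completed DH-HMAC-CHAP exchange with the expected parameters.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Detach before the next key/dhgroup combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0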
00:21:41.042 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:41.042 15:58:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:41.609 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:41.868 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.869 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.869 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.869 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.869 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.869 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.869 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.869 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.127 00:21:42.127 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.127 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.127 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.386 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.386 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.386 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.386 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.386 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.386 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.386 { 00:21:42.386 "cntlid": 57, 00:21:42.386 "qid": 0, 00:21:42.386 "state": "enabled", 00:21:42.386 "thread": "nvmf_tgt_poll_group_000", 00:21:42.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:42.386 "listen_address": { 00:21:42.386 "trtype": "RDMA", 00:21:42.386 "adrfam": "IPv4", 00:21:42.386 "traddr": "192.168.100.8", 00:21:42.386 "trsvcid": "4420" 00:21:42.386 }, 00:21:42.386 "peer_address": { 00:21:42.386 "trtype": "RDMA", 00:21:42.386 "adrfam": "IPv4", 00:21:42.386 "traddr": "192.168.100.8", 00:21:42.386 "trsvcid": "43913" 00:21:42.386 }, 00:21:42.386 "auth": { 00:21:42.386 "state": "completed", 00:21:42.386 "digest": "sha384", 00:21:42.386 "dhgroup": "ffdhe2048" 00:21:42.386 } 00:21:42.386 } 00:21:42.386 ]' 00:21:42.386 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.386 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:42.386 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.644 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:42.644 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.645 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.645 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # 
hostrpc bdev_nvme_detach_controller nvme0 00:21:42.645 15:58:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.645 15:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:42.645 15:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:43.579 15:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.579 15:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:43.579 15:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.579 15:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.579 15:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.579 15:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.579 15:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:43.579 15:58:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:43.579 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:43.579 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.579 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:43.579 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:43.579 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:43.579 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.579 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.579 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.579 
15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.579 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.579 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.579 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.579 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:43.842 00:21:43.842 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.842 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.842 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.100 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.100 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.100 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.100 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.100 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.100 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.100 { 00:21:44.100 "cntlid": 59, 00:21:44.100 "qid": 0, 00:21:44.100 "state": "enabled", 00:21:44.100 "thread": "nvmf_tgt_poll_group_000", 00:21:44.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:44.100 "listen_address": { 00:21:44.100 "trtype": "RDMA", 00:21:44.100 "adrfam": "IPv4", 00:21:44.100 "traddr": "192.168.100.8", 00:21:44.100 "trsvcid": "4420" 00:21:44.100 }, 00:21:44.100 "peer_address": { 00:21:44.100 "trtype": "RDMA", 00:21:44.100 "adrfam": "IPv4", 00:21:44.100 "traddr": "192.168.100.8", 00:21:44.100 "trsvcid": "34078" 00:21:44.100 }, 00:21:44.100 "auth": { 00:21:44.100 "state": "completed", 00:21:44.100 "digest": "sha384", 00:21:44.100 "dhgroup": "ffdhe2048" 00:21:44.100 } 00:21:44.100 } 00:21:44.100 ]' 00:21:44.100 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.100 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:44.100 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.100 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 
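Besides the bdev-level attach, every key is also exercised through the kernel initiator: after detaching the controller, the test connects with nvme-cli using the generated DHHC-1 secrets, then disconnects and deregisters the host. A minimal sketch of that leg, with <...> placeholders standing in for the full DHHC-1 strings printed in the log:

# Connect via the kernel RDMA initiator, presenting the host secret and the expected
# controller secret (flags mirror the nvme_connect helper in the log above).
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
  --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
  --dhchap-secret 'DHHC-1:01:<host key>' --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'

# Drop the session and remove the host entry so the next combination starts clean.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
  nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e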
00:21:44.359 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.359 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.359 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.359 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.359 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:44.359 15:58:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.294 15:58:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.552 00:21:45.810 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.810 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.810 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.810 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.810 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.810 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.810 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.810 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.810 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.810 { 00:21:45.810 "cntlid": 61, 00:21:45.810 "qid": 0, 00:21:45.810 "state": "enabled", 00:21:45.810 "thread": "nvmf_tgt_poll_group_000", 00:21:45.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:45.810 "listen_address": { 00:21:45.810 "trtype": "RDMA", 00:21:45.810 "adrfam": "IPv4", 00:21:45.810 "traddr": "192.168.100.8", 00:21:45.810 "trsvcid": "4420" 00:21:45.810 }, 00:21:45.810 "peer_address": { 00:21:45.810 "trtype": "RDMA", 00:21:45.810 "adrfam": "IPv4", 00:21:45.810 "traddr": "192.168.100.8", 00:21:45.810 "trsvcid": "59562" 00:21:45.810 }, 00:21:45.810 "auth": { 00:21:45.810 "state": "completed", 00:21:45.810 "digest": "sha384", 00:21:45.810 "dhgroup": "ffdhe2048" 00:21:45.810 } 00:21:45.810 } 00:21:45.810 ]' 00:21:45.810 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.810 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:21:45.810 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.068 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.068 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.068 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.068 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.068 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.327 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:46.327 15:58:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:46.894 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.894 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:46.894 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.894 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.894 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.894 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.894 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:46.894 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:47.153 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:47.153 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.153 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:47.153 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:47.153 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:47.153 15:58:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.153 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:47.153 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.153 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.153 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.153 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:47.153 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.153 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.411 00:21:47.411 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.411 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.411 15:58:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.669 { 00:21:47.669 "cntlid": 63, 00:21:47.669 "qid": 0, 00:21:47.669 "state": "enabled", 00:21:47.669 "thread": "nvmf_tgt_poll_group_000", 00:21:47.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:47.669 "listen_address": { 00:21:47.669 "trtype": "RDMA", 00:21:47.669 "adrfam": "IPv4", 00:21:47.669 "traddr": "192.168.100.8", 00:21:47.669 "trsvcid": "4420" 00:21:47.669 }, 00:21:47.669 "peer_address": { 00:21:47.669 "trtype": "RDMA", 00:21:47.669 "adrfam": "IPv4", 00:21:47.669 "traddr": "192.168.100.8", 00:21:47.669 "trsvcid": "33943" 00:21:47.669 }, 00:21:47.669 "auth": { 00:21:47.669 "state": "completed", 00:21:47.669 "digest": "sha384", 00:21:47.669 "dhgroup": "ffdhe2048" 00:21:47.669 } 00:21:47.669 } 00:21:47.669 ]' 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.669 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.928 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:47.928 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:48.494 15:58:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.753 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.011 00:21:49.011 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.011 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.011 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.269 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.269 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.269 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.269 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.269 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.269 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.269 { 00:21:49.269 "cntlid": 65, 00:21:49.269 "qid": 0, 00:21:49.269 "state": "enabled", 00:21:49.269 "thread": "nvmf_tgt_poll_group_000", 00:21:49.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:49.269 "listen_address": { 00:21:49.269 "trtype": "RDMA", 00:21:49.269 "adrfam": "IPv4", 00:21:49.269 "traddr": "192.168.100.8", 00:21:49.269 "trsvcid": "4420" 00:21:49.269 }, 00:21:49.269 "peer_address": { 00:21:49.269 "trtype": "RDMA", 00:21:49.269 "adrfam": "IPv4", 00:21:49.269 "traddr": "192.168.100.8", 00:21:49.269 "trsvcid": "51222" 
00:21:49.269 }, 00:21:49.269 "auth": { 00:21:49.269 "state": "completed", 00:21:49.269 "digest": "sha384", 00:21:49.269 "dhgroup": "ffdhe3072" 00:21:49.269 } 00:21:49.269 } 00:21:49.269 ]' 00:21:49.269 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.269 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:49.269 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.527 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.527 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.527 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.527 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.527 15:58:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.785 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:49.785 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:50.351 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.351 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:50.351 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.351 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.351 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.351 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.351 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:50.351 15:58:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # 
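Each round is then verified from the qpair list: nvmf_subsystem_get_qpairs is dumped (the JSON block above) and jq asserts that the negotiated digest and dhgroup match what was configured and that auth.state is "completed" before the controller is detached. A hedged restatement of that check, assuming $qpairs holds the JSON shown above:

    # Verification step as in the trace; $qpairs is assumed to contain the output of
    # "rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0".
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # Only after these pass is nvme0 detached and the kernel nvme-cli path tried.
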
connect_authenticate sha384 ffdhe3072 1 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.610 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.868 00:21:50.868 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.868 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.868 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.127 { 00:21:51.127 "cntlid": 67, 00:21:51.127 "qid": 0, 00:21:51.127 "state": "enabled", 00:21:51.127 "thread": "nvmf_tgt_poll_group_000", 00:21:51.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 
00:21:51.127 "listen_address": { 00:21:51.127 "trtype": "RDMA", 00:21:51.127 "adrfam": "IPv4", 00:21:51.127 "traddr": "192.168.100.8", 00:21:51.127 "trsvcid": "4420" 00:21:51.127 }, 00:21:51.127 "peer_address": { 00:21:51.127 "trtype": "RDMA", 00:21:51.127 "adrfam": "IPv4", 00:21:51.127 "traddr": "192.168.100.8", 00:21:51.127 "trsvcid": "59592" 00:21:51.127 }, 00:21:51.127 "auth": { 00:21:51.127 "state": "completed", 00:21:51.127 "digest": "sha384", 00:21:51.127 "dhgroup": "ffdhe3072" 00:21:51.127 } 00:21:51.127 } 00:21:51.127 ]' 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.127 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.386 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:51.386 15:58:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:51.952 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.210 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:52.210 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.210 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.210 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.210 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.210 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.210 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.468 15:58:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.727 00:21:52.727 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.727 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.727 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.727 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.727 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.727 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.727 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.727 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.727 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:21:52.727 { 00:21:52.727 "cntlid": 69, 00:21:52.727 "qid": 0, 00:21:52.727 "state": "enabled", 00:21:52.727 "thread": "nvmf_tgt_poll_group_000", 00:21:52.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:52.727 "listen_address": { 00:21:52.727 "trtype": "RDMA", 00:21:52.727 "adrfam": "IPv4", 00:21:52.727 "traddr": "192.168.100.8", 00:21:52.727 "trsvcid": "4420" 00:21:52.727 }, 00:21:52.727 "peer_address": { 00:21:52.727 "trtype": "RDMA", 00:21:52.727 "adrfam": "IPv4", 00:21:52.727 "traddr": "192.168.100.8", 00:21:52.727 "trsvcid": "37280" 00:21:52.727 }, 00:21:52.727 "auth": { 00:21:52.727 "state": "completed", 00:21:52.727 "digest": "sha384", 00:21:52.727 "dhgroup": "ffdhe3072" 00:21:52.727 } 00:21:52.727 } 00:21:52.727 ]' 00:21:52.727 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.986 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:52.986 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.986 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.986 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.986 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.986 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.986 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.244 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:53.244 15:58:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:21:53.814 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.814 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:53.814 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.814 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.814 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.814 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.814 15:58:39 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:53.814 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.076 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.334 00:21:54.334 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.334 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.334 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.592 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.592 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.592 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.592 15:58:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.592 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.592 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.592 { 00:21:54.592 "cntlid": 71, 00:21:54.592 "qid": 0, 00:21:54.592 "state": "enabled", 00:21:54.592 "thread": "nvmf_tgt_poll_group_000", 00:21:54.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:54.592 "listen_address": { 00:21:54.592 "trtype": "RDMA", 00:21:54.592 "adrfam": "IPv4", 00:21:54.592 "traddr": "192.168.100.8", 00:21:54.592 "trsvcid": "4420" 00:21:54.592 }, 00:21:54.592 "peer_address": { 00:21:54.592 "trtype": "RDMA", 00:21:54.592 "adrfam": "IPv4", 00:21:54.592 "traddr": "192.168.100.8", 00:21:54.592 "trsvcid": "43183" 00:21:54.592 }, 00:21:54.592 "auth": { 00:21:54.592 "state": "completed", 00:21:54.592 "digest": "sha384", 00:21:54.592 "dhgroup": "ffdhe3072" 00:21:54.592 } 00:21:54.592 } 00:21:54.592 ]' 00:21:54.592 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.592 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.592 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.592 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.592 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.850 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.850 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.850 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.850 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:54.850 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:21:55.418 15:58:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.676 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:55.676 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.676 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.676 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.676 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in 
"${dhgroups[@]}" 00:21:55.676 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.676 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:55.677 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.935 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.194 00:21:56.194 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.194 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.194 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.453 15:58:41 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.453 { 00:21:56.453 "cntlid": 73, 00:21:56.453 "qid": 0, 00:21:56.453 "state": "enabled", 00:21:56.453 "thread": "nvmf_tgt_poll_group_000", 00:21:56.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:56.453 "listen_address": { 00:21:56.453 "trtype": "RDMA", 00:21:56.453 "adrfam": "IPv4", 00:21:56.453 "traddr": "192.168.100.8", 00:21:56.453 "trsvcid": "4420" 00:21:56.453 }, 00:21:56.453 "peer_address": { 00:21:56.453 "trtype": "RDMA", 00:21:56.453 "adrfam": "IPv4", 00:21:56.453 "traddr": "192.168.100.8", 00:21:56.453 "trsvcid": "57163" 00:21:56.453 }, 00:21:56.453 "auth": { 00:21:56.453 "state": "completed", 00:21:56.453 "digest": "sha384", 00:21:56.453 "dhgroup": "ffdhe4096" 00:21:56.453 } 00:21:56.453 } 00:21:56.453 ]' 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.453 15:58:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.712 15:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:56.712 15:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:21:57.278 15:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.535 15:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:57.535 15:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.535 15:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.535 15:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.535 15:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.535 15:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:57.535 15:58:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:57.535 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:57.535 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.535 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:57.535 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:57.535 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:57.535 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.535 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.535 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.535 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.793 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.793 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.793 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.793 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:58.051 00:21:58.051 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.051 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.051 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.051 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.051 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.051 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.051 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.051 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.051 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.051 { 00:21:58.051 "cntlid": 75, 00:21:58.051 "qid": 0, 00:21:58.051 "state": "enabled", 00:21:58.051 "thread": "nvmf_tgt_poll_group_000", 00:21:58.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:58.051 "listen_address": { 00:21:58.051 "trtype": "RDMA", 00:21:58.051 "adrfam": "IPv4", 00:21:58.051 "traddr": "192.168.100.8", 00:21:58.051 "trsvcid": "4420" 00:21:58.051 }, 00:21:58.051 "peer_address": { 00:21:58.051 "trtype": "RDMA", 00:21:58.051 "adrfam": "IPv4", 00:21:58.051 "traddr": "192.168.100.8", 00:21:58.051 "trsvcid": "58352" 00:21:58.051 }, 00:21:58.051 "auth": { 00:21:58.051 "state": "completed", 00:21:58.051 "digest": "sha384", 00:21:58.051 "dhgroup": "ffdhe4096" 00:21:58.051 } 00:21:58.051 } 00:21:58.051 ]' 00:21:58.051 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.309 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:58.309 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.309 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:58.309 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.309 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.309 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.309 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.566 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:58.566 15:58:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:21:59.130 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.130 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:59.131 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.131 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.131 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.131 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.131 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:59.131 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.390 15:58:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.647 00:21:59.647 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 
-- # hostrpc bdev_nvme_get_controllers 00:21:59.647 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.647 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.905 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.905 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.905 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.905 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.905 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.905 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.905 { 00:21:59.905 "cntlid": 77, 00:21:59.905 "qid": 0, 00:21:59.905 "state": "enabled", 00:21:59.905 "thread": "nvmf_tgt_poll_group_000", 00:21:59.905 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:21:59.905 "listen_address": { 00:21:59.905 "trtype": "RDMA", 00:21:59.905 "adrfam": "IPv4", 00:21:59.905 "traddr": "192.168.100.8", 00:21:59.905 "trsvcid": "4420" 00:21:59.905 }, 00:21:59.905 "peer_address": { 00:21:59.905 "trtype": "RDMA", 00:21:59.905 "adrfam": "IPv4", 00:21:59.905 "traddr": "192.168.100.8", 00:21:59.905 "trsvcid": "39156" 00:21:59.905 }, 00:21:59.905 "auth": { 00:21:59.905 "state": "completed", 00:21:59.905 "digest": "sha384", 00:21:59.905 "dhgroup": "ffdhe4096" 00:21:59.905 } 00:21:59.905 } 00:21:59.905 ]' 00:21:59.905 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.905 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.905 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.905 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.905 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.905 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.162 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.162 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.162 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:00.162 15:58:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret 
DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.096 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
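Besides the SPDK host stack, every key is also pushed through the kernel initiator: the host secret (and, for bidirectional slots, the controller secret) goes straight to nvme-cli, and a successful connect followed by a clean disconnect counts as a pass before the host entry is removed ahead of the next slot. The invocation pattern below is copied from the trace with the secrets replaced by placeholders; the reading of the DHHC-1:NN: prefix (00 for a cleartext secret, 01/02/03 for SHA-256/384/512 transformed secrets) is an inference from the key lengths in the log, not something the log states.

    # Kernel-initiator round trip as seen in the trace; <...> placeholders stand in
    # for the DHHC-1:... strings logged above.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret '<host secret>' --dhchap-ctrl-secret '<controller secret>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # Target-side cleanup before the next key slot:
    #   rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
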
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:01.354 00:22:01.354 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.354 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.354 15:58:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.611 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.611 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.611 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.611 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.611 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.612 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.612 { 00:22:01.612 "cntlid": 79, 00:22:01.612 "qid": 0, 00:22:01.612 "state": "enabled", 00:22:01.612 "thread": "nvmf_tgt_poll_group_000", 00:22:01.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:01.612 "listen_address": { 00:22:01.612 "trtype": "RDMA", 00:22:01.612 "adrfam": "IPv4", 00:22:01.612 "traddr": "192.168.100.8", 00:22:01.612 "trsvcid": "4420" 00:22:01.612 }, 00:22:01.612 "peer_address": { 00:22:01.612 "trtype": "RDMA", 00:22:01.612 "adrfam": "IPv4", 00:22:01.612 "traddr": "192.168.100.8", 00:22:01.612 "trsvcid": "45121" 00:22:01.612 }, 00:22:01.612 "auth": { 00:22:01.612 "state": "completed", 00:22:01.612 "digest": "sha384", 00:22:01.612 "dhgroup": "ffdhe4096" 00:22:01.612 } 00:22:01.612 } 00:22:01.612 ]' 00:22:01.612 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.612 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:01.612 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.869 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.869 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.869 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.869 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.869 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.126 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:02.126 15:58:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:02.691 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.691 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:02.691 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.691 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.691 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.691 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:02.691 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.691 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:02.691 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:02.948 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:02.948 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.948 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:02.948 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:02.948 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:02.948 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.948 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.948 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.948 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.948 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.948 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.948 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.948 15:58:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:03.206 00:22:03.206 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.206 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.206 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.464 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.464 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.464 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.464 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.464 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.464 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.464 { 00:22:03.464 "cntlid": 81, 00:22:03.464 "qid": 0, 00:22:03.464 "state": "enabled", 00:22:03.464 "thread": "nvmf_tgt_poll_group_000", 00:22:03.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:03.464 "listen_address": { 00:22:03.464 "trtype": "RDMA", 00:22:03.464 "adrfam": "IPv4", 00:22:03.464 "traddr": "192.168.100.8", 00:22:03.464 "trsvcid": "4420" 00:22:03.464 }, 00:22:03.464 "peer_address": { 00:22:03.464 "trtype": "RDMA", 00:22:03.464 "adrfam": "IPv4", 00:22:03.464 "traddr": "192.168.100.8", 00:22:03.464 "trsvcid": "36542" 00:22:03.464 }, 00:22:03.464 "auth": { 00:22:03.464 "state": "completed", 00:22:03.464 "digest": "sha384", 00:22:03.464 "dhgroup": "ffdhe6144" 00:22:03.464 } 00:22:03.464 } 00:22:03.464 ]' 00:22:03.464 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.464 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:03.464 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.464 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:03.464 15:58:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.722 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.722 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.722 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.722 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:03.722 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:04.655 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.656 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:04.656 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.656 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.656 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.656 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.656 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:04.656 15:58:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.656 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:05.295 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.295 { 00:22:05.295 "cntlid": 83, 00:22:05.295 "qid": 0, 00:22:05.295 "state": "enabled", 00:22:05.295 "thread": "nvmf_tgt_poll_group_000", 00:22:05.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:05.295 "listen_address": { 00:22:05.295 "trtype": "RDMA", 00:22:05.295 "adrfam": "IPv4", 00:22:05.295 "traddr": "192.168.100.8", 00:22:05.295 "trsvcid": "4420" 00:22:05.295 }, 00:22:05.295 "peer_address": { 00:22:05.295 "trtype": "RDMA", 00:22:05.295 "adrfam": "IPv4", 00:22:05.295 "traddr": "192.168.100.8", 00:22:05.295 "trsvcid": "42474" 00:22:05.295 }, 00:22:05.295 "auth": { 00:22:05.295 "state": "completed", 00:22:05.295 "digest": "sha384", 00:22:05.295 "dhgroup": "ffdhe6144" 00:22:05.295 } 00:22:05.295 } 00:22:05.295 ]' 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.295 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.553 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.553 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:22:05.553 15:58:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.553 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:05.553 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.485 15:58:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.485 15:58:51 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.485 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.485 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.485 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.049 00:22:07.049 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.049 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.049 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.049 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.049 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.049 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.049 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.049 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.049 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.049 { 00:22:07.049 "cntlid": 85, 00:22:07.049 "qid": 0, 00:22:07.049 "state": "enabled", 00:22:07.049 "thread": "nvmf_tgt_poll_group_000", 00:22:07.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:07.049 "listen_address": { 00:22:07.049 "trtype": "RDMA", 00:22:07.049 "adrfam": "IPv4", 00:22:07.049 "traddr": "192.168.100.8", 00:22:07.049 "trsvcid": "4420" 00:22:07.049 }, 00:22:07.049 "peer_address": { 00:22:07.049 "trtype": "RDMA", 00:22:07.049 "adrfam": "IPv4", 00:22:07.049 "traddr": "192.168.100.8", 00:22:07.049 "trsvcid": "57206" 00:22:07.049 }, 00:22:07.049 "auth": { 00:22:07.049 "state": "completed", 00:22:07.049 "digest": "sha384", 00:22:07.049 "dhgroup": "ffdhe6144" 00:22:07.049 } 00:22:07.049 } 00:22:07.049 ]' 00:22:07.049 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.049 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:07.049 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.305 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.305 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.305 
15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.305 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.305 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.562 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:07.562 15:58:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:08.125 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.125 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:08.125 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.125 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.125 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.125 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.125 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:08.125 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:08.383 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:08.383 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.383 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:08.383 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:08.383 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:08.383 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.383 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:08.383 15:58:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.383 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.383 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.383 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:08.383 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:08.383 15:58:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:08.948 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.948 { 00:22:08.948 "cntlid": 87, 00:22:08.948 "qid": 0, 00:22:08.948 "state": "enabled", 00:22:08.948 "thread": "nvmf_tgt_poll_group_000", 00:22:08.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:08.948 "listen_address": { 00:22:08.948 "trtype": "RDMA", 00:22:08.948 "adrfam": "IPv4", 00:22:08.948 "traddr": "192.168.100.8", 00:22:08.948 "trsvcid": "4420" 00:22:08.948 }, 00:22:08.948 "peer_address": { 00:22:08.948 "trtype": "RDMA", 00:22:08.948 "adrfam": "IPv4", 00:22:08.948 "traddr": "192.168.100.8", 00:22:08.948 "trsvcid": "42409" 00:22:08.948 }, 00:22:08.948 "auth": { 00:22:08.948 "state": "completed", 00:22:08.948 "digest": "sha384", 00:22:08.948 "dhgroup": "ffdhe6144" 00:22:08.948 } 00:22:08.948 } 00:22:08.948 ]' 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:22:08.948 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.205 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.205 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.205 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.462 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:09.462 15:58:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:10.026 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.026 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:10.026 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.026 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.026 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.026 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:10.026 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.026 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:10.026 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.283 15:58:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:10.847 00:22:10.847 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.847 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.847 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.847 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.847 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.847 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.847 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.847 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.847 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.847 { 00:22:10.847 "cntlid": 89, 00:22:10.847 "qid": 0, 00:22:10.847 "state": "enabled", 00:22:10.847 "thread": "nvmf_tgt_poll_group_000", 00:22:10.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:10.847 "listen_address": { 00:22:10.847 "trtype": "RDMA", 00:22:10.847 "adrfam": "IPv4", 00:22:10.847 "traddr": "192.168.100.8", 00:22:10.847 "trsvcid": "4420" 00:22:10.847 }, 00:22:10.847 "peer_address": { 00:22:10.847 "trtype": "RDMA", 00:22:10.847 "adrfam": "IPv4", 00:22:10.847 "traddr": "192.168.100.8", 00:22:10.847 "trsvcid": "48332" 00:22:10.847 }, 00:22:10.847 "auth": { 00:22:10.847 "state": "completed", 00:22:10.847 "digest": "sha384", 00:22:10.847 "dhgroup": "ffdhe8192" 00:22:10.847 } 00:22:10.847 } 00:22:10.847 ]' 00:22:10.847 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.104 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.104 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.104 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.104 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.104 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.104 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.104 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.361 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:11.361 15:58:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:11.924 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.924 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:11.924 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.924 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.924 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.924 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.924 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:11.924 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:12.181 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:12.181 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.181 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:12.181 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 
00:22:12.181 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:12.182 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.182 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.182 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.182 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.182 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.182 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.182 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.182 15:58:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.749 00:22:12.749 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.749 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.749 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.006 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.006 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.006 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.006 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.006 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.006 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.006 { 00:22:13.006 "cntlid": 91, 00:22:13.006 "qid": 0, 00:22:13.006 "state": "enabled", 00:22:13.006 "thread": "nvmf_tgt_poll_group_000", 00:22:13.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:13.006 "listen_address": { 00:22:13.006 "trtype": "RDMA", 00:22:13.006 "adrfam": "IPv4", 00:22:13.006 "traddr": "192.168.100.8", 00:22:13.006 "trsvcid": "4420" 00:22:13.006 }, 00:22:13.006 "peer_address": { 00:22:13.006 "trtype": "RDMA", 00:22:13.006 "adrfam": "IPv4", 00:22:13.006 "traddr": "192.168.100.8", 00:22:13.006 "trsvcid": "35497" 00:22:13.006 }, 00:22:13.006 "auth": { 
00:22:13.006 "state": "completed", 00:22:13.006 "digest": "sha384", 00:22:13.006 "dhgroup": "ffdhe8192" 00:22:13.006 } 00:22:13.007 } 00:22:13.007 ]' 00:22:13.007 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.007 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.007 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.007 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.007 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.007 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.007 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.007 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.264 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:13.264 15:58:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:13.828 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.087 15:58:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:14.652 00:22:14.652 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.652 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.652 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.910 { 00:22:14.910 "cntlid": 93, 00:22:14.910 "qid": 0, 00:22:14.910 "state": "enabled", 00:22:14.910 "thread": "nvmf_tgt_poll_group_000", 00:22:14.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:14.910 "listen_address": { 00:22:14.910 "trtype": "RDMA", 00:22:14.910 "adrfam": "IPv4", 00:22:14.910 "traddr": "192.168.100.8", 
00:22:14.910 "trsvcid": "4420" 00:22:14.910 }, 00:22:14.910 "peer_address": { 00:22:14.910 "trtype": "RDMA", 00:22:14.910 "adrfam": "IPv4", 00:22:14.910 "traddr": "192.168.100.8", 00:22:14.910 "trsvcid": "53005" 00:22:14.910 }, 00:22:14.910 "auth": { 00:22:14.910 "state": "completed", 00:22:14.910 "digest": "sha384", 00:22:14.910 "dhgroup": "ffdhe8192" 00:22:14.910 } 00:22:14.910 } 00:22:14.910 ]' 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.910 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.167 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:15.167 15:59:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:15.732 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.989 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.246 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.246 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:16.246 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.246 15:59:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:16.503 00:22:16.503 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.503 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.503 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.761 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.761 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.761 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.761 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.761 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.761 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.761 { 00:22:16.761 "cntlid": 95, 00:22:16.761 "qid": 0, 00:22:16.761 "state": "enabled", 00:22:16.761 "thread": "nvmf_tgt_poll_group_000", 00:22:16.761 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:16.761 "listen_address": { 00:22:16.761 "trtype": "RDMA", 00:22:16.761 "adrfam": "IPv4", 00:22:16.761 "traddr": "192.168.100.8", 00:22:16.761 "trsvcid": "4420" 00:22:16.761 }, 00:22:16.761 "peer_address": { 00:22:16.761 "trtype": "RDMA", 00:22:16.761 "adrfam": "IPv4", 00:22:16.761 "traddr": "192.168.100.8", 00:22:16.761 "trsvcid": "40613" 00:22:16.761 }, 00:22:16.761 "auth": { 00:22:16.761 "state": "completed", 00:22:16.761 "digest": "sha384", 00:22:16.761 "dhgroup": "ffdhe8192" 00:22:16.761 } 00:22:16.761 } 00:22:16.761 ]' 00:22:16.761 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.761 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:16.761 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.018 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:17.018 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.018 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.018 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.018 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.276 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:17.276 15:59:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:17.841 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.842 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:17.842 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.842 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.842 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.842 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:17.842 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:17.842 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.842 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:17.842 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.100 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:18.357 00:22:18.357 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.357 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.357 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.615 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.615 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.615 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.615 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.615 15:59:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.615 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.615 { 00:22:18.615 "cntlid": 97, 00:22:18.615 "qid": 0, 00:22:18.615 "state": "enabled", 00:22:18.615 "thread": "nvmf_tgt_poll_group_000", 00:22:18.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:18.615 "listen_address": { 00:22:18.615 "trtype": "RDMA", 00:22:18.615 "adrfam": "IPv4", 00:22:18.615 "traddr": "192.168.100.8", 00:22:18.615 "trsvcid": "4420" 00:22:18.615 }, 00:22:18.615 "peer_address": { 00:22:18.615 "trtype": "RDMA", 00:22:18.615 "adrfam": "IPv4", 00:22:18.615 "traddr": "192.168.100.8", 00:22:18.615 "trsvcid": "45947" 00:22:18.615 }, 00:22:18.615 "auth": { 00:22:18.615 "state": "completed", 00:22:18.615 "digest": "sha512", 00:22:18.615 "dhgroup": "null" 00:22:18.615 } 00:22:18.615 } 00:22:18.615 ]' 00:22:18.615 15:59:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.615 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.615 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.615 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:18.615 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.615 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.615 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.615 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.873 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:18.873 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:19.439 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.697 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:19.697 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.697 15:59:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
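The iterations above and below all repeat one fixed sequence per digest/DH-group/key combination. The following is a condensed, hedged reconstruction of that sequence, not the literal contents of target/auth.sh: the rpc.py path, host socket, NQNs, transport address and option names are copied from the trace, while the wrapper functions, loop wiring and the key names (key$keyid / ckey$keyid) are an approximation.

#!/usr/bin/env bash
# Hedged sketch of the per-iteration flow visible in the trace; paths, NQNs and
# RPC/option names are taken from the log, everything else is an approximation.
set -e

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

rpc_cmd() { "$rpc" "$@"; }                        # target-side RPC (default socket)
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # host-side bdev_nvme RPC

connect_authenticate_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Restrict the host to the digest/DH group under test.
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Authorize the host NQN on the subsystem with this DH-HMAC-CHAP key
    # (the trace adds --dhchap-ctrlr-key "ckey$keyid" only for keys that have one).
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"

    # Attach a host-side controller over RDMA; authentication happens during attach.
    hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid"

    # Confirm the controller exists; the qpair's negotiated auth parameters are
    # then checked with jq (shown separately further below).
    [[ "$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

    # Tear down; the trace also connects/disconnects with the kernel initiator
    # (nvme-cli) at this point before deauthorizing the host.
    hostrpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
}

# For example, the iteration traced above:
connect_authenticate_sketch sha512 null 0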
00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.697 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.955 00:22:19.955 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.955 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.955 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.212 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:20.212 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:20.212 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.212 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.212 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.212 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:20.212 { 00:22:20.212 "cntlid": 99, 00:22:20.212 "qid": 0, 00:22:20.212 "state": "enabled", 00:22:20.212 "thread": "nvmf_tgt_poll_group_000", 00:22:20.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:20.212 "listen_address": { 00:22:20.212 "trtype": "RDMA", 00:22:20.212 "adrfam": "IPv4", 00:22:20.212 "traddr": "192.168.100.8", 00:22:20.212 "trsvcid": "4420" 00:22:20.212 }, 00:22:20.212 "peer_address": { 00:22:20.212 "trtype": "RDMA", 00:22:20.212 "adrfam": "IPv4", 00:22:20.212 "traddr": "192.168.100.8", 00:22:20.212 "trsvcid": "53428" 00:22:20.212 }, 00:22:20.212 "auth": { 00:22:20.212 "state": "completed", 00:22:20.212 "digest": "sha512", 00:22:20.212 "dhgroup": "null" 00:22:20.212 } 00:22:20.212 } 00:22:20.212 ]' 00:22:20.212 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.212 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:20.212 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.212 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:20.212 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.470 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.470 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.470 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.470 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:20.470 15:59:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:21.404 
15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.404 15:59:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.663 00:22:21.663 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.663 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.663 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.921 
15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.921 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.921 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.921 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.921 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.921 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.921 { 00:22:21.921 "cntlid": 101, 00:22:21.921 "qid": 0, 00:22:21.921 "state": "enabled", 00:22:21.921 "thread": "nvmf_tgt_poll_group_000", 00:22:21.921 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:21.921 "listen_address": { 00:22:21.921 "trtype": "RDMA", 00:22:21.921 "adrfam": "IPv4", 00:22:21.921 "traddr": "192.168.100.8", 00:22:21.921 "trsvcid": "4420" 00:22:21.921 }, 00:22:21.921 "peer_address": { 00:22:21.921 "trtype": "RDMA", 00:22:21.921 "adrfam": "IPv4", 00:22:21.921 "traddr": "192.168.100.8", 00:22:21.921 "trsvcid": "40503" 00:22:21.921 }, 00:22:21.921 "auth": { 00:22:21.921 "state": "completed", 00:22:21.921 "digest": "sha512", 00:22:21.921 "dhgroup": "null" 00:22:21.921 } 00:22:21.921 } 00:22:21.921 ]' 00:22:21.921 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.921 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.921 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.921 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:21.921 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.179 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.179 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.179 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.179 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:22.179 15:59:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.114 15:59:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.114 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.372 00:22:23.372 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.372 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.372 15:59:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.630 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.630 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.630 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.630 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.630 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.630 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.630 { 00:22:23.630 "cntlid": 103, 00:22:23.630 "qid": 0, 00:22:23.630 "state": "enabled", 00:22:23.630 "thread": "nvmf_tgt_poll_group_000", 00:22:23.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:23.630 "listen_address": { 00:22:23.630 "trtype": "RDMA", 00:22:23.630 "adrfam": "IPv4", 00:22:23.630 "traddr": "192.168.100.8", 00:22:23.630 "trsvcid": "4420" 00:22:23.630 }, 00:22:23.630 "peer_address": { 00:22:23.630 "trtype": "RDMA", 00:22:23.630 "adrfam": "IPv4", 00:22:23.630 "traddr": "192.168.100.8", 00:22:23.630 "trsvcid": "51505" 00:22:23.630 }, 00:22:23.630 "auth": { 00:22:23.630 "state": "completed", 00:22:23.630 "digest": "sha512", 00:22:23.630 "dhgroup": "null" 00:22:23.630 } 00:22:23.630 } 00:22:23.630 ]' 00:22:23.630 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.630 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.630 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.630 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:23.630 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.887 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.887 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.887 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.887 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:23.887 15:59:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:24.819 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.820 15:59:10 
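The pass/fail signal for each iteration is the qpair JSON printed above: nvmf_subsystem_get_qpairs must report the digest and DH group that were configured and an auth state of "completed". A minimal stand-alone version of that check, using the same jq filters as the trace ($digest and $dhgroup stand for the values of the current iteration):

# Verify the negotiated DH-HMAC-CHAP parameters on the subsystem's qpair.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

[[ "$(jq -r '.[0].auth.digest'  <<<"$qpairs")" == "$digest"  ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<<"$qpairs")" == "$dhgroup" ]]
[[ "$(jq -r '.[0].auth.state'   <<<"$qpairs")" == completed  ]]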
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:24.820 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.078 00:22:25.078 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
hostrpc bdev_nvme_get_controllers 00:22:25.078 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.078 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.336 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.336 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.336 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.336 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.336 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.336 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.336 { 00:22:25.336 "cntlid": 105, 00:22:25.336 "qid": 0, 00:22:25.336 "state": "enabled", 00:22:25.336 "thread": "nvmf_tgt_poll_group_000", 00:22:25.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:25.336 "listen_address": { 00:22:25.336 "trtype": "RDMA", 00:22:25.336 "adrfam": "IPv4", 00:22:25.336 "traddr": "192.168.100.8", 00:22:25.336 "trsvcid": "4420" 00:22:25.336 }, 00:22:25.336 "peer_address": { 00:22:25.336 "trtype": "RDMA", 00:22:25.336 "adrfam": "IPv4", 00:22:25.336 "traddr": "192.168.100.8", 00:22:25.336 "trsvcid": "58004" 00:22:25.336 }, 00:22:25.336 "auth": { 00:22:25.336 "state": "completed", 00:22:25.336 "digest": "sha512", 00:22:25.336 "dhgroup": "ffdhe2048" 00:22:25.336 } 00:22:25.336 } 00:22:25.336 ]' 00:22:25.336 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.593 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:25.593 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.593 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:25.593 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.593 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.593 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.593 15:59:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.851 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:25.851 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 
--dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:26.417 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.417 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:26.417 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.417 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.417 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.417 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.417 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:26.417 15:59:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.675 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:26.933 00:22:26.933 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.933 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.933 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.191 { 00:22:27.191 "cntlid": 107, 00:22:27.191 "qid": 0, 00:22:27.191 "state": "enabled", 00:22:27.191 "thread": "nvmf_tgt_poll_group_000", 00:22:27.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:27.191 "listen_address": { 00:22:27.191 "trtype": "RDMA", 00:22:27.191 "adrfam": "IPv4", 00:22:27.191 "traddr": "192.168.100.8", 00:22:27.191 "trsvcid": "4420" 00:22:27.191 }, 00:22:27.191 "peer_address": { 00:22:27.191 "trtype": "RDMA", 00:22:27.191 "adrfam": "IPv4", 00:22:27.191 "traddr": "192.168.100.8", 00:22:27.191 "trsvcid": "45940" 00:22:27.191 }, 00:22:27.191 "auth": { 00:22:27.191 "state": "completed", 00:22:27.191 "digest": "sha512", 00:22:27.191 "dhgroup": "ffdhe2048" 00:22:27.191 } 00:22:27.191 } 00:22:27.191 ]' 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.191 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.449 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 
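Besides the SPDK bdev attach, each key is also exercised through the kernel initiator: nvme_connect wraps nvme-cli and passes the DHHC-1 secrets in-band, after which the controller is disconnected again. Below is a hedged stand-alone equivalent of the invocation traced here, in the form used for keys that also carry a controller secret; the secret strings are placeholders for the generated DHHC-1:... values shown in the log.

# Kernel-initiator leg: connect with in-band DH-HMAC-CHAP secrets, then disconnect.
# The secrets below are placeholders; the trace supplies the real DHHC-1 strings.
host_secret='DHHC-1:01:<host key material>:'
ctrl_secret='DHHC-1:02:<controller key material>:'

nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
    --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
    --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"

nvme disconnect -n nqn.2024-03.io.spdk:cnode0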
00:22:27.449 15:59:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:28.014 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.273 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.530 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.530 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.530 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.530 15:59:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.530 00:22:28.800 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.800 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.800 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.800 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.800 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.800 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.800 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.800 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.800 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.800 { 00:22:28.800 "cntlid": 109, 00:22:28.800 "qid": 0, 00:22:28.800 "state": "enabled", 00:22:28.800 "thread": "nvmf_tgt_poll_group_000", 00:22:28.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:28.800 "listen_address": { 00:22:28.800 "trtype": "RDMA", 00:22:28.800 "adrfam": "IPv4", 00:22:28.800 "traddr": "192.168.100.8", 00:22:28.800 "trsvcid": "4420" 00:22:28.800 }, 00:22:28.800 "peer_address": { 00:22:28.800 "trtype": "RDMA", 00:22:28.800 "adrfam": "IPv4", 00:22:28.800 "traddr": "192.168.100.8", 00:22:28.800 "trsvcid": "43754" 00:22:28.800 }, 00:22:28.800 "auth": { 00:22:28.800 "state": "completed", 00:22:28.800 "digest": "sha512", 00:22:28.800 "dhgroup": "ffdhe2048" 00:22:28.800 } 00:22:28.800 } 00:22:28.800 ]' 00:22:28.800 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.800 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.800 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.135 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:29.135 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.135 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.135 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.135 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.135 15:59:14 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:29.135 15:59:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:29.702 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.961 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:29.961 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.961 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.961 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.961 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.961 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:29.961 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:30.221 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:30.221 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.221 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:30.221 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:30.221 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:30.221 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.221 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:30.221 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.221 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.221 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.221 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:30.221 15:59:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:30.221 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:30.480 00:22:30.480 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.480 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.480 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.480 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.480 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.480 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.480 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.480 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.480 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.480 { 00:22:30.480 "cntlid": 111, 00:22:30.480 "qid": 0, 00:22:30.480 "state": "enabled", 00:22:30.480 "thread": "nvmf_tgt_poll_group_000", 00:22:30.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:30.480 "listen_address": { 00:22:30.480 "trtype": "RDMA", 00:22:30.480 "adrfam": "IPv4", 00:22:30.480 "traddr": "192.168.100.8", 00:22:30.480 "trsvcid": "4420" 00:22:30.480 }, 00:22:30.480 "peer_address": { 00:22:30.480 "trtype": "RDMA", 00:22:30.480 "adrfam": "IPv4", 00:22:30.480 "traddr": "192.168.100.8", 00:22:30.480 "trsvcid": "57331" 00:22:30.480 }, 00:22:30.480 "auth": { 00:22:30.480 "state": "completed", 00:22:30.480 "digest": "sha512", 00:22:30.480 "dhgroup": "ffdhe2048" 00:22:30.480 } 00:22:30.480 } 00:22:30.480 ]' 00:22:30.480 15:59:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.739 15:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.739 15:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.739 15:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:30.739 15:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.739 15:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.739 15:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.739 15:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.998 15:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:30.998 15:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:31.567 15:59:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.567 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:31.567 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.567 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.567 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.567 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:31.567 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.567 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:31.567 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:31.826 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.084 00:22:32.084 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.084 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.084 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.343 { 00:22:32.343 "cntlid": 113, 00:22:32.343 "qid": 0, 00:22:32.343 "state": "enabled", 00:22:32.343 "thread": "nvmf_tgt_poll_group_000", 00:22:32.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:32.343 "listen_address": { 00:22:32.343 "trtype": "RDMA", 00:22:32.343 "adrfam": "IPv4", 00:22:32.343 "traddr": "192.168.100.8", 00:22:32.343 "trsvcid": "4420" 00:22:32.343 }, 00:22:32.343 "peer_address": { 00:22:32.343 "trtype": "RDMA", 00:22:32.343 "adrfam": "IPv4", 00:22:32.343 "traddr": "192.168.100.8", 00:22:32.343 "trsvcid": "39911" 00:22:32.343 }, 00:22:32.343 "auth": { 00:22:32.343 "state": "completed", 00:22:32.343 "digest": "sha512", 00:22:32.343 "dhgroup": "ffdhe3072" 00:22:32.343 } 00:22:32.343 } 00:22:32.343 ]' 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.343 15:59:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.601 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:32.601 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:33.168 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 
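[editor's note] For readability, the target-side registration that the trace repeats for every key index reduces to the sketch below. It is not copied verbatim from target/auth.sh; rpc_cmd is the test suite's wrapper around scripts/rpc.py for the target application, and key1/ckey1 are keyring entries assumed to have been loaded earlier in the run (that setup is outside this excerpt).

    # Sketch: allow the host NQN on the subsystem and pin its DH-HMAC-CHAP keys
    # (key1 = host key, ckey1 = controller key; both are pre-loaded keyring names).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1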
00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.426 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.685 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.685 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.685 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.685 15:59:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.685 00:22:33.943 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.943 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.943 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.943 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.943 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.943 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.943 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.943 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.943 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.943 { 00:22:33.943 "cntlid": 115, 00:22:33.943 "qid": 0, 00:22:33.943 "state": "enabled", 00:22:33.943 "thread": "nvmf_tgt_poll_group_000", 00:22:33.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:33.943 "listen_address": { 00:22:33.943 "trtype": "RDMA", 00:22:33.943 "adrfam": "IPv4", 00:22:33.943 "traddr": "192.168.100.8", 00:22:33.943 "trsvcid": "4420" 00:22:33.943 }, 00:22:33.943 "peer_address": { 00:22:33.943 "trtype": "RDMA", 00:22:33.943 "adrfam": "IPv4", 00:22:33.943 "traddr": "192.168.100.8", 00:22:33.943 "trsvcid": "50130" 00:22:33.943 }, 00:22:33.943 "auth": { 00:22:33.943 "state": "completed", 00:22:33.943 "digest": "sha512", 00:22:33.943 "dhgroup": "ffdhe3072" 00:22:33.943 } 00:22:33.943 } 00:22:33.943 ]' 00:22:33.943 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.943 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.943 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
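[editor's note] The jq checks above are the verification half of connect_authenticate. A minimal sketch of that step, using the test's own hostrpc/rpc_cmd wrappers (hostrpc talks to the host app's socket /var/tmp/host.sock, as the expanded commands in the trace show; rpc_cmd talks to the target app):

    # Sketch: the host app must report the attached controller, and the target's qpair
    # must show the expected digest/dhgroup with authentication in the "completed" state.
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'          # expected: nvme0
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"                       # expected: sha512
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"                       # expected: ffdhe3072
    jq -r '.[0].auth.state'   <<< "$qpairs"                       # expected: completed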
00:22:34.201 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:34.201 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.201 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.201 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.201 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.459 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:34.459 15:59:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:35.026 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:35.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:35.026 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:35.026 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.026 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.026 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.026 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:35.026 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:35.026 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:35.284 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:22:35.284 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.284 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:35.284 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:35.284 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:35.284 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.284 
15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.284 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.284 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.284 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.284 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.284 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.284 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.542 00:22:35.542 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.542 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.542 15:59:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.800 { 00:22:35.800 "cntlid": 117, 00:22:35.800 "qid": 0, 00:22:35.800 "state": "enabled", 00:22:35.800 "thread": "nvmf_tgt_poll_group_000", 00:22:35.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:35.800 "listen_address": { 00:22:35.800 "trtype": "RDMA", 00:22:35.800 "adrfam": "IPv4", 00:22:35.800 "traddr": "192.168.100.8", 00:22:35.800 "trsvcid": "4420" 00:22:35.800 }, 00:22:35.800 "peer_address": { 00:22:35.800 "trtype": "RDMA", 00:22:35.800 "adrfam": "IPv4", 00:22:35.800 "traddr": "192.168.100.8", 00:22:35.800 "trsvcid": "58530" 00:22:35.800 }, 00:22:35.800 "auth": { 00:22:35.800 "state": "completed", 00:22:35.800 "digest": "sha512", 00:22:35.800 "dhgroup": "ffdhe3072" 00:22:35.800 } 00:22:35.800 } 00:22:35.800 ]' 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.800 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.057 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:36.057 15:59:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:36.622 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
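[editor's note] After each RPC-level pass, the script also exercises the kernel initiator in-band. A hedged sketch of that nvme-cli cycle follows; the secret values are abbreviated here as placeholders, the real ones are the DHHC-1:... strings printed in the trace above.

    # Sketch: connect with nvme-cli using the plaintext DH-HMAC-CHAP secrets, then tear down.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 \
        --dhchap-secret "DHHC-1:02:<host-secret>" --dhchap-ctrl-secret "DHHC-1:01:<ctrl-secret>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0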
00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.880 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:37.138 00:22:37.138 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.138 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.138 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.396 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.396 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.396 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.396 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.396 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.396 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.396 { 00:22:37.396 "cntlid": 119, 00:22:37.396 "qid": 0, 00:22:37.396 "state": "enabled", 00:22:37.396 "thread": "nvmf_tgt_poll_group_000", 00:22:37.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:37.396 "listen_address": { 00:22:37.396 "trtype": "RDMA", 00:22:37.396 "adrfam": "IPv4", 00:22:37.396 "traddr": "192.168.100.8", 00:22:37.396 "trsvcid": "4420" 00:22:37.396 }, 00:22:37.396 "peer_address": { 00:22:37.396 "trtype": "RDMA", 00:22:37.396 "adrfam": "IPv4", 00:22:37.396 "traddr": "192.168.100.8", 00:22:37.396 "trsvcid": "33522" 00:22:37.396 }, 00:22:37.396 "auth": { 00:22:37.396 "state": "completed", 00:22:37.396 "digest": "sha512", 00:22:37.396 "dhgroup": "ffdhe3072" 
00:22:37.396 } 00:22:37.396 } 00:22:37.396 ]' 00:22:37.396 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.396 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.396 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.654 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:37.654 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.654 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.654 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.654 15:59:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.912 15:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:37.912 15:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:38.487 15:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.487 15:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:38.487 15:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.487 15:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.487 15:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.487 15:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:38.487 15:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.487 15:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:38.487 15:59:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.745 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:39.002 00:22:39.002 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.002 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.002 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.260 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.260 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.260 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.260 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.260 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.260 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.260 { 00:22:39.260 "cntlid": 121, 00:22:39.260 "qid": 0, 00:22:39.260 "state": "enabled", 00:22:39.260 "thread": "nvmf_tgt_poll_group_000", 00:22:39.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:39.260 "listen_address": { 00:22:39.260 "trtype": "RDMA", 00:22:39.260 "adrfam": "IPv4", 00:22:39.260 "traddr": "192.168.100.8", 00:22:39.260 "trsvcid": "4420" 00:22:39.260 }, 00:22:39.260 "peer_address": { 00:22:39.260 "trtype": "RDMA", 
00:22:39.260 "adrfam": "IPv4", 00:22:39.260 "traddr": "192.168.100.8", 00:22:39.260 "trsvcid": "32963" 00:22:39.260 }, 00:22:39.260 "auth": { 00:22:39.260 "state": "completed", 00:22:39.260 "digest": "sha512", 00:22:39.260 "dhgroup": "ffdhe4096" 00:22:39.260 } 00:22:39.260 } 00:22:39.260 ]' 00:22:39.260 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.260 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.261 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.261 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:39.261 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.261 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.261 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.261 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.518 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:39.518 15:59:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:40.083 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:40.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:40.340 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:40.340 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.340 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.340 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.340 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:40.340 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:40.340 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
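[editor's note] The trace from here on corresponds to the nested loops at target/auth.sh lines 119-121. A simplified sketch of that driver is below; the dhgroups/keys arrays are defined earlier in the script (not in this excerpt), only sha512 with ffdhe3072/ffdhe4096/ffdhe6144 appears in this part of the log, and the real loop body additionally does the nvme-cli pass and host removal between iterations.

    # Simplified sketch of the loop driving this section (array contents assumed):
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done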
00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.598 15:59:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.856 00:22:40.856 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.856 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.856 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.856 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.113 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.113 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.113 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.113 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.113 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.113 { 00:22:41.113 "cntlid": 123, 00:22:41.113 "qid": 0, 00:22:41.113 "state": "enabled", 00:22:41.113 "thread": "nvmf_tgt_poll_group_000", 
00:22:41.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:41.113 "listen_address": { 00:22:41.113 "trtype": "RDMA", 00:22:41.113 "adrfam": "IPv4", 00:22:41.113 "traddr": "192.168.100.8", 00:22:41.113 "trsvcid": "4420" 00:22:41.113 }, 00:22:41.113 "peer_address": { 00:22:41.113 "trtype": "RDMA", 00:22:41.113 "adrfam": "IPv4", 00:22:41.113 "traddr": "192.168.100.8", 00:22:41.113 "trsvcid": "56751" 00:22:41.113 }, 00:22:41.113 "auth": { 00:22:41.113 "state": "completed", 00:22:41.113 "digest": "sha512", 00:22:41.113 "dhgroup": "ffdhe4096" 00:22:41.113 } 00:22:41.113 } 00:22:41.113 ]' 00:22:41.113 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.113 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:41.113 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:41.113 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:41.113 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:41.113 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:41.113 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:41.114 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:41.371 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:41.371 15:59:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:41.936 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
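[editor's note] The teardown that just completed for key1, and that repeats between every iteration, amounts to the following sketch (the nvme-cli connect/disconnect pass sits in between, as the trace shows):

    # Sketch: detach the host-side bdev controller, then drop the host entry from the
    # subsystem so the next key can be registered cleanly.
    hostrpc bdev_nvme_detach_controller nvme0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e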
00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:42.193 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.194 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.194 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.194 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.194 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.194 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.194 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:42.451 00:22:42.451 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:42.451 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:42.451 15:59:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.708 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.708 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.708 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.708 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.708 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
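[editor's note] The host-side attach that bdev_connect just performed for key2 is shown expanded in the trace; for reference, it is the single RPC below (key2/ckey2 are keyring names assumed to have been loaded earlier in the run):

    # Sketch: attach the NVMe-oF controller over RDMA with DH-HMAC-CHAP keys supplied
    # to the host application via its RPC socket.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2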
00:22:42.708 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.708 { 00:22:42.708 "cntlid": 125, 00:22:42.708 "qid": 0, 00:22:42.708 "state": "enabled", 00:22:42.708 "thread": "nvmf_tgt_poll_group_000", 00:22:42.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:42.708 "listen_address": { 00:22:42.708 "trtype": "RDMA", 00:22:42.708 "adrfam": "IPv4", 00:22:42.708 "traddr": "192.168.100.8", 00:22:42.708 "trsvcid": "4420" 00:22:42.708 }, 00:22:42.708 "peer_address": { 00:22:42.708 "trtype": "RDMA", 00:22:42.708 "adrfam": "IPv4", 00:22:42.708 "traddr": "192.168.100.8", 00:22:42.708 "trsvcid": "42489" 00:22:42.708 }, 00:22:42.708 "auth": { 00:22:42.708 "state": "completed", 00:22:42.708 "digest": "sha512", 00:22:42.708 "dhgroup": "ffdhe4096" 00:22:42.708 } 00:22:42.708 } 00:22:42.708 ]' 00:22:42.708 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.709 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.709 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.966 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:42.966 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.966 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.966 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.966 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.223 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:43.223 15:59:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:43.806 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.806 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:43.806 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.806 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.807 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.807 15:59:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.807 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:43.807 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:44.072 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:44.328 00:22:44.328 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.328 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.328 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.585 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.585 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.585 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.585 15:59:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.585 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.585 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.585 { 00:22:44.585 "cntlid": 127, 00:22:44.585 "qid": 0, 00:22:44.585 "state": "enabled", 00:22:44.585 "thread": "nvmf_tgt_poll_group_000", 00:22:44.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:44.585 "listen_address": { 00:22:44.585 "trtype": "RDMA", 00:22:44.585 "adrfam": "IPv4", 00:22:44.585 "traddr": "192.168.100.8", 00:22:44.585 "trsvcid": "4420" 00:22:44.585 }, 00:22:44.585 "peer_address": { 00:22:44.585 "trtype": "RDMA", 00:22:44.585 "adrfam": "IPv4", 00:22:44.585 "traddr": "192.168.100.8", 00:22:44.585 "trsvcid": "44540" 00:22:44.585 }, 00:22:44.585 "auth": { 00:22:44.585 "state": "completed", 00:22:44.585 "digest": "sha512", 00:22:44.585 "dhgroup": "ffdhe4096" 00:22:44.585 } 00:22:44.585 } 00:22:44.585 ]' 00:22:44.585 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.585 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.585 15:59:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.585 15:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:44.585 15:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.585 15:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.585 15:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.585 15:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.842 15:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:44.842 15:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:45.407 15:59:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.664 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:45.664 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.664 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.664 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.664 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:45.664 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:45.665 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.665 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.921 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:46.179 00:22:46.179 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.179 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.179 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.436 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.436 15:59:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.436 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.436 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.436 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.436 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.436 { 00:22:46.436 "cntlid": 129, 00:22:46.436 "qid": 0, 00:22:46.436 "state": "enabled", 00:22:46.436 "thread": "nvmf_tgt_poll_group_000", 00:22:46.436 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:46.436 "listen_address": { 00:22:46.436 "trtype": "RDMA", 00:22:46.436 "adrfam": "IPv4", 00:22:46.436 "traddr": "192.168.100.8", 00:22:46.436 "trsvcid": "4420" 00:22:46.436 }, 00:22:46.436 "peer_address": { 00:22:46.436 "trtype": "RDMA", 00:22:46.436 "adrfam": "IPv4", 00:22:46.436 "traddr": "192.168.100.8", 00:22:46.436 "trsvcid": "40316" 00:22:46.436 }, 00:22:46.436 "auth": { 00:22:46.436 "state": "completed", 00:22:46.436 "digest": "sha512", 00:22:46.436 "dhgroup": "ffdhe6144" 00:22:46.436 } 00:22:46.436 } 00:22:46.436 ]' 00:22:46.436 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.436 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.436 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.436 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:46.436 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.437 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.437 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.437 15:59:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.694 15:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:46.694 15:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:47.259 15:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.516 15:59:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:47.516 15:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.516 15:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.516 15:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.516 15:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.516 15:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:47.516 15:59:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:47.516 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:47.516 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:47.516 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:47.516 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:47.516 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:47.516 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:47.517 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.517 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.517 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.517 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.517 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.517 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.517 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:48.082 00:22:48.082 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.082 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:22:48.082 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.082 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.082 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.082 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.082 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.082 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.082 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.082 { 00:22:48.082 "cntlid": 131, 00:22:48.082 "qid": 0, 00:22:48.082 "state": "enabled", 00:22:48.082 "thread": "nvmf_tgt_poll_group_000", 00:22:48.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:48.082 "listen_address": { 00:22:48.082 "trtype": "RDMA", 00:22:48.082 "adrfam": "IPv4", 00:22:48.082 "traddr": "192.168.100.8", 00:22:48.082 "trsvcid": "4420" 00:22:48.082 }, 00:22:48.082 "peer_address": { 00:22:48.082 "trtype": "RDMA", 00:22:48.082 "adrfam": "IPv4", 00:22:48.082 "traddr": "192.168.100.8", 00:22:48.082 "trsvcid": "35726" 00:22:48.082 }, 00:22:48.082 "auth": { 00:22:48.082 "state": "completed", 00:22:48.082 "digest": "sha512", 00:22:48.082 "dhgroup": "ffdhe6144" 00:22:48.082 } 00:22:48.082 } 00:22:48.082 ]' 00:22:48.082 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.082 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:48.082 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.339 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:48.339 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.339 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.339 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.339 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:48.597 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:48.597 15:59:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret 
DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:49.161 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.161 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.161 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:49.161 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.161 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.161 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.161 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.161 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:49.161 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.419 15:59:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.676 00:22:49.676 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.676 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.676 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.934 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.934 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.934 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.934 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.934 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.934 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.934 { 00:22:49.934 "cntlid": 133, 00:22:49.934 "qid": 0, 00:22:49.934 "state": "enabled", 00:22:49.934 "thread": "nvmf_tgt_poll_group_000", 00:22:49.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:49.934 "listen_address": { 00:22:49.934 "trtype": "RDMA", 00:22:49.934 "adrfam": "IPv4", 00:22:49.934 "traddr": "192.168.100.8", 00:22:49.934 "trsvcid": "4420" 00:22:49.934 }, 00:22:49.934 "peer_address": { 00:22:49.934 "trtype": "RDMA", 00:22:49.934 "adrfam": "IPv4", 00:22:49.934 "traddr": "192.168.100.8", 00:22:49.934 "trsvcid": "60987" 00:22:49.934 }, 00:22:49.934 "auth": { 00:22:49.934 "state": "completed", 00:22:49.934 "digest": "sha512", 00:22:49.934 "dhgroup": "ffdhe6144" 00:22:49.934 } 00:22:49.934 } 00:22:49.934 ]' 00:22:49.934 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.934 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.934 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.191 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:50.191 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.191 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.191 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.191 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.191 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:50.191 15:59:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:51.123 15:59:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:51.688 00:22:51.688 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.688 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.688 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.688 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.688 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.688 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.688 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.688 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.688 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.688 { 00:22:51.688 "cntlid": 135, 00:22:51.688 "qid": 0, 00:22:51.688 "state": "enabled", 00:22:51.688 "thread": "nvmf_tgt_poll_group_000", 00:22:51.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:51.688 "listen_address": { 00:22:51.688 "trtype": "RDMA", 00:22:51.688 "adrfam": "IPv4", 00:22:51.688 "traddr": "192.168.100.8", 00:22:51.688 "trsvcid": "4420" 00:22:51.688 }, 00:22:51.688 "peer_address": { 00:22:51.688 "trtype": "RDMA", 00:22:51.688 "adrfam": "IPv4", 00:22:51.688 "traddr": "192.168.100.8", 00:22:51.688 "trsvcid": "45922" 00:22:51.688 }, 00:22:51.688 "auth": { 00:22:51.688 "state": "completed", 00:22:51.688 "digest": "sha512", 00:22:51.688 "dhgroup": "ffdhe6144" 00:22:51.688 } 00:22:51.688 } 00:22:51.688 ]' 00:22:51.688 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.945 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.945 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.945 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:51.945 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.945 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.945 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.946 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.203 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 
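[editorial note] The trace above and below repeats the same cycle for every DH group (ffdhe4096, ffdhe6144, ffdhe8192) and key index: restrict the host to one digest/dhgroup, allow the host NQN on the target with the matching key, attach a bdev controller and check that the qpair reports auth state "completed", detach, then repeat the exchange through nvme-cli and tear down. The condensed sketch below only restates commands already visible in the trace; the unwrapped rpc.py form of the target-side rpc_cmd calls is an assumption (the target RPC socket is not shown in this excerpt), and the DHHC-1 secret is left as a placeholder rather than copied from the log.

    # one iteration of the auth loop, e.g. digest=sha512, dhgroup=ffdhe6144, keyid=3 (sketch)
    rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py"
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e"
    # host side: limit the initiator to the digest/dhgroup under test
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    # target side: allow this host to authenticate against cnode0 with key3 (socket path assumed)
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key3
    # bdev path: attach with the same key, then confirm the qpair finished DH-HMAC-CHAP
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    # kernel initiator path: same exchange via nvme-cli (secret elided here)
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
        --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret "DHHC-1:03:<secret>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"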
00:22:52.203 15:59:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:22:52.825 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.825 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:52.825 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.825 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.825 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.825 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:52.825 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.825 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:52.825 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:53.083 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:53.083 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.083 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:53.083 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:53.083 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:53.083 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.083 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.083 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.083 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.083 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.083 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.083 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.084 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.648 00:22:53.648 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.648 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.648 15:59:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.648 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.649 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.649 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.649 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.906 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.906 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.906 { 00:22:53.906 "cntlid": 137, 00:22:53.906 "qid": 0, 00:22:53.906 "state": "enabled", 00:22:53.906 "thread": "nvmf_tgt_poll_group_000", 00:22:53.906 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:53.906 "listen_address": { 00:22:53.906 "trtype": "RDMA", 00:22:53.906 "adrfam": "IPv4", 00:22:53.906 "traddr": "192.168.100.8", 00:22:53.906 "trsvcid": "4420" 00:22:53.906 }, 00:22:53.906 "peer_address": { 00:22:53.906 "trtype": "RDMA", 00:22:53.906 "adrfam": "IPv4", 00:22:53.906 "traddr": "192.168.100.8", 00:22:53.906 "trsvcid": "43146" 00:22:53.906 }, 00:22:53.906 "auth": { 00:22:53.906 "state": "completed", 00:22:53.906 "digest": "sha512", 00:22:53.906 "dhgroup": "ffdhe8192" 00:22:53.906 } 00:22:53.906 } 00:22:53.906 ]' 00:22:53.906 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.906 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:53.906 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.906 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:53.906 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.906 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.906 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.906 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.163 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:54.163 15:59:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:22:54.727 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.727 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:54.727 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.727 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.727 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.727 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.727 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:54.727 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:54.985 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.550 00:22:55.550 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.550 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.550 15:59:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.808 { 00:22:55.808 "cntlid": 139, 00:22:55.808 "qid": 0, 00:22:55.808 "state": "enabled", 00:22:55.808 "thread": "nvmf_tgt_poll_group_000", 00:22:55.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:55.808 "listen_address": { 00:22:55.808 "trtype": "RDMA", 00:22:55.808 "adrfam": "IPv4", 00:22:55.808 "traddr": "192.168.100.8", 00:22:55.808 "trsvcid": "4420" 00:22:55.808 }, 00:22:55.808 "peer_address": { 00:22:55.808 "trtype": "RDMA", 00:22:55.808 "adrfam": "IPv4", 00:22:55.808 "traddr": "192.168.100.8", 00:22:55.808 "trsvcid": "54274" 00:22:55.808 }, 00:22:55.808 "auth": { 00:22:55.808 "state": "completed", 00:22:55.808 "digest": "sha512", 00:22:55.808 "dhgroup": "ffdhe8192" 00:22:55.808 } 00:22:55.808 } 00:22:55.808 ]' 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.808 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.066 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:56.066 15:59:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: --dhchap-ctrl-secret DHHC-1:02:NmRmNjhkNjljZTkwMjg0OWVlZGE0MmZmMGFhY2UxOWQyYTRmODM0MGE4ZDI4ZDlkmNsrwg==: 00:22:56.631 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.888 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:56.888 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.888 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.888 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.888 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.888 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:56.888 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:56.888 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:56.888 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.888 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:56.888 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:56.889 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:56.889 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.889 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.889 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.889 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.889 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.889 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.889 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:56.889 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.454 00:22:57.454 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.454 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.454 15:59:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.712 { 00:22:57.712 "cntlid": 141, 00:22:57.712 "qid": 0, 00:22:57.712 "state": "enabled", 00:22:57.712 "thread": "nvmf_tgt_poll_group_000", 00:22:57.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:57.712 "listen_address": { 00:22:57.712 "trtype": "RDMA", 00:22:57.712 "adrfam": "IPv4", 00:22:57.712 "traddr": "192.168.100.8", 00:22:57.712 "trsvcid": "4420" 00:22:57.712 }, 00:22:57.712 "peer_address": { 00:22:57.712 "trtype": "RDMA", 00:22:57.712 "adrfam": "IPv4", 00:22:57.712 "traddr": "192.168.100.8", 00:22:57.712 "trsvcid": "60973" 00:22:57.712 }, 00:22:57.712 "auth": { 00:22:57.712 "state": "completed", 00:22:57.712 "digest": "sha512", 00:22:57.712 "dhgroup": "ffdhe8192" 00:22:57.712 } 00:22:57.712 } 00:22:57.712 ]' 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.712 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.969 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:57.969 15:59:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:01:NmNhNDdjYTdmMGM3MDdlZThiNzMwNmIzODE4ODNkYzHA21Kd: 00:22:58.534 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.796 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:58.796 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.796 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.796 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.796 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.796 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:58.796 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:59.054 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:59.311 00:22:59.311 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.311 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.568 15:59:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.568 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.568 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.568 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.568 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.568 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.568 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.568 { 00:22:59.568 "cntlid": 143, 00:22:59.568 "qid": 0, 00:22:59.568 "state": "enabled", 00:22:59.568 "thread": "nvmf_tgt_poll_group_000", 00:22:59.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:22:59.568 "listen_address": { 00:22:59.568 "trtype": "RDMA", 00:22:59.568 "adrfam": "IPv4", 00:22:59.568 "traddr": "192.168.100.8", 00:22:59.568 "trsvcid": "4420" 00:22:59.568 }, 00:22:59.568 "peer_address": { 00:22:59.568 "trtype": "RDMA", 00:22:59.568 "adrfam": "IPv4", 00:22:59.568 "traddr": "192.168.100.8", 00:22:59.568 "trsvcid": "49072" 00:22:59.568 }, 00:22:59.568 "auth": { 00:22:59.568 "state": "completed", 00:22:59.568 "digest": "sha512", 00:22:59.568 "dhgroup": "ffdhe8192" 00:22:59.568 } 00:22:59.568 } 00:22:59.568 ]' 00:22:59.568 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.568 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.568 15:59:45 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.826 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:59.826 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.826 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.826 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.826 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.083 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:23:00.083 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:23:00.648 15:59:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:00.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:00.648 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:00.648 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.648 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.648 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.648 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:00.648 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:00.648 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:00.649 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:00.649 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:00.649 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:00.907 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:00.907 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:00.907 15:59:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:00.907 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:00.907 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:00.907 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.907 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.907 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.907 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.907 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.907 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.907 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:00.907 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.473 00:23:01.473 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.473 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.473 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.473 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.473 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.473 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.473 15:59:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.473 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.473 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.473 { 00:23:01.473 "cntlid": 145, 00:23:01.473 "qid": 0, 00:23:01.473 "state": "enabled", 00:23:01.473 "thread": "nvmf_tgt_poll_group_000", 00:23:01.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:01.473 "listen_address": { 00:23:01.473 "trtype": "RDMA", 00:23:01.473 "adrfam": "IPv4", 00:23:01.473 "traddr": "192.168.100.8", 00:23:01.473 "trsvcid": "4420" 00:23:01.473 }, 00:23:01.473 
"peer_address": { 00:23:01.473 "trtype": "RDMA", 00:23:01.473 "adrfam": "IPv4", 00:23:01.473 "traddr": "192.168.100.8", 00:23:01.473 "trsvcid": "49574" 00:23:01.473 }, 00:23:01.473 "auth": { 00:23:01.473 "state": "completed", 00:23:01.473 "digest": "sha512", 00:23:01.473 "dhgroup": "ffdhe8192" 00:23:01.473 } 00:23:01.474 } 00:23:01.474 ]' 00:23:01.474 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.732 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:01.732 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:01.732 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:01.732 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.732 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.732 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.732 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:01.991 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:23:01.991 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:00:NmViNDhjNDJiNDVjYjA4MjFlZTliZWY5NDM5MDNhYTM5ZDcyMmY5NGM0MWJjNDY1JzEMUQ==: --dhchap-ctrl-secret DHHC-1:03:MmRjMWYxMGIxNzg2NjJlYWJkNmU1ZmY2OTI3MmJlNzEyZGFlZjdjMDRiMDdkNGUzYTFlYzZkOTczYmMyOTlkNkIsm3Y=: 00:23:02.559 15:59:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.559 15:59:48 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:02.559 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:03.126 request: 00:23:03.126 { 00:23:03.126 "name": "nvme0", 00:23:03.126 "trtype": "rdma", 00:23:03.126 "traddr": "192.168.100.8", 00:23:03.126 "adrfam": "ipv4", 00:23:03.126 "trsvcid": "4420", 00:23:03.126 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:03.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:03.126 "prchk_reftag": false, 00:23:03.126 "prchk_guard": false, 00:23:03.126 "hdgst": false, 00:23:03.126 "ddgst": false, 00:23:03.126 "dhchap_key": "key2", 00:23:03.126 "allow_unrecognized_csi": false, 00:23:03.126 "method": "bdev_nvme_attach_controller", 00:23:03.126 "req_id": 1 00:23:03.126 } 00:23:03.126 Got JSON-RPC error response 00:23:03.126 response: 00:23:03.126 { 00:23:03.126 "code": -5, 00:23:03.126 "message": "Input/output error" 00:23:03.126 } 00:23:03.126 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:03.126 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:03.126 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:03.126 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:03.126 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:03.126 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.126 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:03.126 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.126 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:03.127 15:59:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:03.693 request: 00:23:03.693 { 00:23:03.693 "name": "nvme0", 00:23:03.693 "trtype": "rdma", 00:23:03.693 "traddr": "192.168.100.8", 00:23:03.693 "adrfam": "ipv4", 00:23:03.693 "trsvcid": "4420", 00:23:03.693 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:03.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:03.693 "prchk_reftag": false, 00:23:03.693 "prchk_guard": false, 00:23:03.693 "hdgst": false, 00:23:03.693 "ddgst": false, 00:23:03.693 "dhchap_key": "key1", 00:23:03.693 "dhchap_ctrlr_key": "ckey2", 00:23:03.693 "allow_unrecognized_csi": false, 00:23:03.693 "method": "bdev_nvme_attach_controller", 00:23:03.693 "req_id": 1 00:23:03.693 } 00:23:03.693 Got JSON-RPC error response 00:23:03.693 response: 00:23:03.693 { 00:23:03.693 "code": -5, 00:23:03.693 "message": "Input/output error" 00:23:03.693 } 00:23:03.693 15:59:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.693 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.259 request: 00:23:04.259 { 00:23:04.259 "name": "nvme0", 
00:23:04.259 "trtype": "rdma", 00:23:04.259 "traddr": "192.168.100.8", 00:23:04.259 "adrfam": "ipv4", 00:23:04.259 "trsvcid": "4420", 00:23:04.259 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:04.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:04.259 "prchk_reftag": false, 00:23:04.259 "prchk_guard": false, 00:23:04.259 "hdgst": false, 00:23:04.259 "ddgst": false, 00:23:04.259 "dhchap_key": "key1", 00:23:04.259 "dhchap_ctrlr_key": "ckey1", 00:23:04.259 "allow_unrecognized_csi": false, 00:23:04.259 "method": "bdev_nvme_attach_controller", 00:23:04.259 "req_id": 1 00:23:04.259 } 00:23:04.259 Got JSON-RPC error response 00:23:04.259 response: 00:23:04.259 { 00:23:04.259 "code": -5, 00:23:04.259 "message": "Input/output error" 00:23:04.259 } 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3349789 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3349789 ']' 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3349789 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3349789 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3349789' 00:23:04.259 killing process with pid 3349789 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3349789 00:23:04.259 15:59:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3349789 00:23:05.635 15:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:05.635 15:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:05.635 15:59:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.635 15:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.635 15:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3374199 00:23:05.635 15:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3374199 00:23:05.635 15:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:05.635 15:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3374199 ']' 00:23:05.635 15:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.635 15:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.635 15:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.635 15:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.635 15:59:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3374199 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3374199 ']' 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.571 15:59:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.571 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.571 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:06.571 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:06.571 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.571 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.829 null0 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.XyA 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.tWj ]] 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.tWj 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.5ZD 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Fqr ]] 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Fqr 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OI0 00:23:07.087 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.6kw ]] 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6kw 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8d2 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:07.088 15:59:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:08.021 nvme0n1 00:23:08.021 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:08.022 { 00:23:08.022 "cntlid": 1, 00:23:08.022 "qid": 0, 00:23:08.022 "state": "enabled", 00:23:08.022 "thread": "nvmf_tgt_poll_group_000", 00:23:08.022 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:08.022 "listen_address": { 00:23:08.022 "trtype": "RDMA", 00:23:08.022 "adrfam": "IPv4", 00:23:08.022 "traddr": "192.168.100.8", 00:23:08.022 "trsvcid": "4420" 00:23:08.022 }, 00:23:08.022 "peer_address": { 00:23:08.022 "trtype": "RDMA", 00:23:08.022 "adrfam": "IPv4", 00:23:08.022 "traddr": "192.168.100.8", 00:23:08.022 "trsvcid": "52769" 00:23:08.022 }, 00:23:08.022 "auth": { 00:23:08.022 "state": "completed", 00:23:08.022 "digest": "sha512", 00:23:08.022 "dhgroup": "ffdhe8192" 00:23:08.022 } 00:23:08.022 } 00:23:08.022 ]' 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:08.022 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:08.280 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.280 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.281 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.281 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:23:08.281 15:59:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:23:08.847 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.106 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:09.106 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.106 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.106 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.106 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key3 00:23:09.106 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.106 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.106 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.106 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:09.106 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:09.365 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:09.365 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:09.365 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:09.365 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:09.365 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.365 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:09.365 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.365 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:09.365 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:09.365 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:09.623 request: 00:23:09.623 { 00:23:09.623 "name": "nvme0", 00:23:09.623 "trtype": "rdma", 00:23:09.623 "traddr": "192.168.100.8", 00:23:09.623 "adrfam": "ipv4", 00:23:09.623 "trsvcid": "4420", 00:23:09.623 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:09.623 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:09.623 "prchk_reftag": false, 00:23:09.623 "prchk_guard": false, 00:23:09.623 "hdgst": false, 00:23:09.623 "ddgst": false, 00:23:09.623 "dhchap_key": "key3", 00:23:09.623 "allow_unrecognized_csi": false, 00:23:09.623 "method": "bdev_nvme_attach_controller", 00:23:09.623 "req_id": 1 00:23:09.623 } 00:23:09.623 Got JSON-RPC error response 00:23:09.623 response: 00:23:09.624 { 00:23:09.624 "code": -5, 00:23:09.624 "message": "Input/output error" 00:23:09.624 } 00:23:09.624 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:09.624 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:09.624 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:09.624 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:09.624 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:09.624 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:09.624 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:09.624 15:59:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:09.624 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:09.624 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:09.624 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:09.624 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:09.624 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.624 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:09.624 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:09.624 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:23:09.624 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:09.624 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:09.883 request: 00:23:09.883 { 00:23:09.883 "name": "nvme0", 00:23:09.883 "trtype": "rdma", 00:23:09.883 "traddr": "192.168.100.8", 00:23:09.883 "adrfam": "ipv4", 00:23:09.883 "trsvcid": "4420", 00:23:09.883 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:09.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:09.883 "prchk_reftag": false, 00:23:09.883 "prchk_guard": false, 00:23:09.883 "hdgst": false, 00:23:09.883 "ddgst": false, 00:23:09.883 "dhchap_key": "key3", 00:23:09.883 "allow_unrecognized_csi": false, 00:23:09.883 "method": "bdev_nvme_attach_controller", 00:23:09.883 "req_id": 1 00:23:09.883 } 00:23:09.883 Got JSON-RPC error response 00:23:09.883 response: 00:23:09.883 { 00:23:09.883 "code": -5, 00:23:09.883 "message": "Input/output error" 00:23:09.883 } 00:23:09.883 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:09.883 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:09.883 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:09.883 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:09.883 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:09.883 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:10.141 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:10.142 15:59:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:10.709 request: 00:23:10.709 { 00:23:10.709 "name": "nvme0", 00:23:10.709 "trtype": "rdma", 00:23:10.709 "traddr": "192.168.100.8", 00:23:10.709 "adrfam": "ipv4", 00:23:10.709 "trsvcid": "4420", 00:23:10.709 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:10.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:10.709 "prchk_reftag": false, 00:23:10.709 "prchk_guard": false, 00:23:10.709 "hdgst": false, 00:23:10.709 "ddgst": false, 00:23:10.709 "dhchap_key": "key0", 00:23:10.709 "dhchap_ctrlr_key": "key1", 00:23:10.709 "allow_unrecognized_csi": false, 00:23:10.709 "method": "bdev_nvme_attach_controller", 00:23:10.709 "req_id": 1 00:23:10.709 } 00:23:10.709 Got JSON-RPC error response 00:23:10.709 response: 00:23:10.709 { 00:23:10.709 "code": -5, 00:23:10.709 "message": "Input/output error" 00:23:10.709 } 00:23:10.709 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:10.709 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:10.709 
15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:10.709 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:10.709 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:10.709 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:10.709 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:10.709 nvme0n1 00:23:10.967 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:10.967 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:10.967 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:10.968 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.968 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:10.968 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.226 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 00:23:11.226 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.226 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.226 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.226 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:11.226 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:11.226 15:59:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:12.160 nvme0n1 00:23:12.160 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:12.160 15:59:57 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.160 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:12.160 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.160 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:12.160 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.160 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.160 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.160 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:12.160 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.160 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:12.418 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.418 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:23:12.418 15:59:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid 8013ee90-59d8-e711-906e-00163566263e -l 0 --dhchap-secret DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: --dhchap-ctrl-secret DHHC-1:03:NGE4MTU1OTM1NDhhYzE0NWQ3OTNkOWQ2NWY1YjcyMmNmMmMwZmQ2NTBhZWI3NzAwMTcyYWMzZGRhMjJkYTFmYrEKxkQ=: 00:23:12.984 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:12.984 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:12.984 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:12.984 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:12.984 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:12.984 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:12.984 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:12.984 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.984 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.243 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:23:13.243 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:13.243 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:13.244 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:13.244 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.244 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:13.244 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.244 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:13.244 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:13.244 15:59:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:13.810 request: 00:23:13.810 { 00:23:13.810 "name": "nvme0", 00:23:13.810 "trtype": "rdma", 00:23:13.810 "traddr": "192.168.100.8", 00:23:13.810 "adrfam": "ipv4", 00:23:13.810 "trsvcid": "4420", 00:23:13.810 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:13.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e", 00:23:13.810 "prchk_reftag": false, 00:23:13.810 "prchk_guard": false, 00:23:13.810 "hdgst": false, 00:23:13.810 "ddgst": false, 00:23:13.810 "dhchap_key": "key1", 00:23:13.810 "allow_unrecognized_csi": false, 00:23:13.810 "method": "bdev_nvme_attach_controller", 00:23:13.810 "req_id": 1 00:23:13.810 } 00:23:13.811 Got JSON-RPC error response 00:23:13.811 response: 00:23:13.811 { 00:23:13.811 "code": -5, 00:23:13.811 "message": "Input/output error" 00:23:13.811 } 00:23:13.811 15:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:13.811 15:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:13.811 15:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:13.811 15:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:13.811 15:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:13.811 15:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:13.811 15:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:14.376 nvme0n1 00:23:14.376 15:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:14.376 15:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:14.376 15:59:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.634 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.634 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.634 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.893 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:14.893 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.893 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.893 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.893 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:14.893 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:14.893 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:15.151 nvme0n1 00:23:15.151 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:15.151 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:15.151 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.410 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.410 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:15.410 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:15.668 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:15.668 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.668 16:00:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: '' 2s 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: ]] 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NDAyMzRkOTc5MDljN2RkZmFiNTA4NWVhNjRjNWI4ZGPzsbVM: 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:15.668 16:00:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.614 16:00:03 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: 2s 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: ]] 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YzNjOWY2ZjI4YjQ5YmMzZmU3MGE0ZTJlMTFmOGQwYTIzMTI4ZTEyYTg5MDcyOTQ0VVhKZQ==: 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:17.614 16:00:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:19.540 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:19.540 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:19.540 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:19.540 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:19.540 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:19.540 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:19.798 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:19.798 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:19.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:19.798 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:19.798 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.798 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.798 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.798 16:00:05 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:19.798 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:19.798 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:20.730 nvme0n1 00:23:20.731 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:20.731 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.731 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.731 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.731 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:20.731 16:00:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:20.988 16:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:20.988 16:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:20.988 16:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.246 16:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.246 16:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:21.246 16:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.246 16:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.246 16:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.246 16:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:21.246 16:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:21.504 16:00:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:21.504 16:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:21.504 16:00:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:21.504 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:22.068 request: 00:23:22.068 { 00:23:22.068 "name": "nvme0", 00:23:22.068 "dhchap_key": "key1", 00:23:22.068 "dhchap_ctrlr_key": "key3", 00:23:22.068 "method": "bdev_nvme_set_keys", 00:23:22.068 "req_id": 1 00:23:22.068 } 00:23:22.068 Got JSON-RPC error response 00:23:22.068 response: 00:23:22.068 { 00:23:22.068 "code": -13, 00:23:22.068 "message": "Permission denied" 00:23:22.068 } 00:23:22.068 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:22.068 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:22.068 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:22.068 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:22.068 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc 
bdev_nvme_get_controllers 00:23:22.068 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.068 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:22.325 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:23:22.325 16:00:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:23.255 16:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:23.255 16:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:23.255 16:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:23.512 16:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:23.512 16:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:23.512 16:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.512 16:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.512 16:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.512 16:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:23.512 16:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:23.512 16:00:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:24.076 nvme0n1 00:23:24.076 16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:24.076 16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.076 16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.076 16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.076 16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:24.076 
16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:24.076 16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:24.076 16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:24.076 16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.076 16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:24.076 16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:24.076 16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:24.076 16:00:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:24.640 request: 00:23:24.640 { 00:23:24.640 "name": "nvme0", 00:23:24.640 "dhchap_key": "key2", 00:23:24.640 "dhchap_ctrlr_key": "key0", 00:23:24.640 "method": "bdev_nvme_set_keys", 00:23:24.640 "req_id": 1 00:23:24.640 } 00:23:24.640 Got JSON-RPC error response 00:23:24.640 response: 00:23:24.640 { 00:23:24.640 "code": -13, 00:23:24.640 "message": "Permission denied" 00:23:24.640 } 00:23:24.640 16:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:24.640 16:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:24.640 16:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:24.640 16:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:24.640 16:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:24.640 16:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:24.640 16:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.897 16:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:24.897 16:00:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:25.830 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:25.830 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:25.830 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:26.088 16:00:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3350064 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3350064 ']' 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3350064 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3350064 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3350064' 00:23:26.088 killing process with pid 3350064 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3350064 00:23:26.088 16:00:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3350064 00:23:28.615 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:28.615 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:28.615 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:28.615 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:23:28.615 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:23:28.615 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:28.615 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:28.615 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:23:28.615 rmmod nvme_rdma 00:23:28.615 rmmod nvme_fabrics 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3374199 ']' 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3374199 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3374199 ']' 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3374199 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3374199 00:23:28.616 16:00:13 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3374199' 00:23:28.616 killing process with pid 3374199 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3374199 00:23:28.616 16:00:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3374199 00:23:29.554 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:29.554 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:23:29.554 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.XyA /tmp/spdk.key-sha256.5ZD /tmp/spdk.key-sha384.OI0 /tmp/spdk.key-sha512.8d2 /tmp/spdk.key-sha512.tWj /tmp/spdk.key-sha384.Fqr /tmp/spdk.key-sha256.6kw '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:23:29.554 00:23:29.554 real 2m47.847s 00:23:29.554 user 6m21.778s 00:23:29.554 sys 0m24.538s 00:23:29.554 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.554 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.554 ************************************ 00:23:29.554 END TEST nvmf_auth_target 00:23:29.554 ************************************ 00:23:29.555 16:00:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:23:29.555 16:00:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:23:29.555 16:00:15 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:23:29.555 16:00:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:29.555 16:00:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.555 16:00:15 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:29.555 ************************************ 00:23:29.555 START TEST nvmf_fuzz 00:23:29.555 ************************************ 00:23:29.555 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:23:29.813 * Looking for test storage... 
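The auth_target teardown above closes out the DH-CHAP re-key sequence exercised by the target/auth.sh steps in this trace. A minimal standalone sketch of that flow, reusing the exact rpc.py calls visible above (socket paths, NQNs and key names are the ones from this run; it assumes key0-key3 were registered with both the target and the host applications earlier in the test, outside this excerpt):

#!/usr/bin/env bash
# Hypothetical re-key sketch modelled on the trace above; not the test script itself.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

# Target side (default /var/tmp/spdk.sock): allow the new key pair for this host.
$rpc nvmf_subsystem_set_keys "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key key3

# Host side: push the same pair to the live controller. A pair the target has not
# been given is rejected with JSON-RPC -13 "Permission denied", as in the
# NOT bdev_nvme_set_keys cases above.
$rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

# Reconnect with the rotated pair; attaching with a key the target no longer accepts
# fails with -5 "Input/output error", as in the NOT bdev_connect case above.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3

After a successful rotation, bdev_nvme_get_controllers on /var/tmp/host.sock should again list nvme0, which is what the jq -r '.[].name' checks in the trace assert.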
00:23:29.813 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:23:29.813 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:29.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.814 --rc genhtml_branch_coverage=1 00:23:29.814 --rc genhtml_function_coverage=1 00:23:29.814 --rc genhtml_legend=1 00:23:29.814 --rc geninfo_all_blocks=1 00:23:29.814 --rc geninfo_unexecuted_blocks=1 00:23:29.814 00:23:29.814 ' 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:29.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.814 --rc genhtml_branch_coverage=1 00:23:29.814 --rc genhtml_function_coverage=1 00:23:29.814 --rc genhtml_legend=1 00:23:29.814 --rc geninfo_all_blocks=1 00:23:29.814 --rc geninfo_unexecuted_blocks=1 00:23:29.814 00:23:29.814 ' 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:29.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.814 --rc genhtml_branch_coverage=1 00:23:29.814 --rc genhtml_function_coverage=1 00:23:29.814 --rc genhtml_legend=1 00:23:29.814 --rc geninfo_all_blocks=1 00:23:29.814 --rc geninfo_unexecuted_blocks=1 00:23:29.814 00:23:29.814 ' 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:29.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:29.814 --rc genhtml_branch_coverage=1 00:23:29.814 --rc genhtml_function_coverage=1 00:23:29.814 --rc genhtml_legend=1 00:23:29.814 --rc geninfo_all_blocks=1 00:23:29.814 --rc geninfo_unexecuted_blocks=1 00:23:29.814 00:23:29.814 ' 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:29.814 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:29.814 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:23:29.815 16:00:15 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.370 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:36.371 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:36.371 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
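The device-discovery trace above picks out the two mlx5 ports (0000:d9:00.0/.1, exposed as mlx_0_0/mlx_0_1) that the fuzz run will use. Before the target starts, nvmf/common.sh loads the IB/RDMA kernel modules and gives each port an address in 192.168.100.0/24; a rough manual equivalent for a machine laid out like this rig is sketched below. Interface names and addresses are specific to this run, and the ip addr add lines are an assumption about what the allocate_nic_ips helper amounts to here (the trace only shows the verification via ip -o -4 addr show):

# Hand-rolled approximation of the RDMA bring-up done by nvmf/common.sh on this
# testbed; the module list matches the trace, the address assignment is assumed.
for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    sudo modprobe "$m"
done

sudo ip addr add 192.168.100.8/24 dev mlx_0_0   # NVMF_FIRST_TARGET_IP in the trace
sudo ip addr add 192.168.100.9/24 dev mlx_0_1   # NVMF_SECOND_TARGET_IP

# The initiator side additionally needs the fabrics transport module:
sudo modprobe nvme-rdma

With the addresses in place, common.sh sets NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' and the fuzz target is started against the first target IP over RDMA, as the following lines show.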
00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:36.371 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:36.371 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # rdma_device_init 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # uname 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@66 -- # modprobe ib_cm 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@67 -- # modprobe ib_core 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@68 -- # modprobe ib_umad 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@70 -- # modprobe iw_cm 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@530 -- # allocate_nic_ips 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # get_rdma_if_list 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:23:36.371 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:23:36.372 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:36.372 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:36.372 altname enp217s0f0np0 00:23:36.372 altname ens818f0np0 00:23:36.372 inet 192.168.100.8/24 scope global mlx_0_0 
00:23:36.372 valid_lft forever preferred_lft forever 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:23:36.372 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:36.372 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:36.372 altname enp217s0f1np1 00:23:36.372 altname ens818f1np1 00:23:36.372 inet 192.168.100.9/24 scope global mlx_0_1 00:23:36.372 valid_lft forever preferred_lft forever 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # get_rdma_if_list 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_0 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:36.372 16:00:21 
nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@108 -- # echo mlx_0_1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@109 -- # continue 2 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # awk '{print $4}' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # cut -d/ -f1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:23:36.372 192.168.100.9' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:23:36.372 192.168.100.9' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # head -n 1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:23:36.372 192.168.100.9' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # tail -n +2 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # head -n 1 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3382043 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT 
SIGTERM EXIT 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3382043 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3382043 ']' 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:36.372 16:00:21 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:37.306 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.306 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:23:37.306 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:37.306 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.306 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:37.306 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.306 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:37.306 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.306 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:37.567 Malloc0 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 
192.168.100.8 -s 4420 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:23:37.567 16:00:22 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:24:09.622 Fuzzing completed. Shutting down the fuzz application 00:24:09.622 00:24:09.622 Dumping successful admin opcodes: 00:24:09.622 8, 9, 10, 24, 00:24:09.622 Dumping successful io opcodes: 00:24:09.622 0, 9, 00:24:09.622 NS: 0x2000008f0ec0 I/O qp, Total commands completed: 804839, total successful commands: 4671, random_seed: 3850438144 00:24:09.622 NS: 0x2000008f0ec0 admin qp, Total commands completed: 116480, total successful commands: 950, random_seed: 2565504512 00:24:09.622 16:00:53 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:09.880 Fuzzing completed. Shutting down the fuzz application 00:24:09.880 00:24:09.880 Dumping successful admin opcodes: 00:24:09.880 24, 00:24:09.880 Dumping successful io opcodes: 00:24:09.880 00:24:09.880 NS: 0x2000008f0ec0 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2658378040 00:24:09.880 NS: 0x2000008f0ec0 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2658472614 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-rdma 00:24:09.880 rmmod nvme_rdma 00:24:09.880 rmmod nvme_fabrics 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 3382043 ']' 00:24:09.880 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 3382043 00:24:09.881 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3382043 ']' 00:24:09.881 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 3382043 00:24:09.881 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:24:09.881 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.881 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3382043 00:24:09.881 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:09.881 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:09.881 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3382043' 00:24:09.881 killing process with pid 3382043 00:24:09.881 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 3382043 00:24:09.881 16:00:55 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 3382043 00:24:11.255 16:00:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:11.255 16:00:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:24:11.255 16:00:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:11.513 00:24:11.513 real 0m41.708s 00:24:11.513 user 0m54.532s 00:24:11.513 sys 0m19.270s 00:24:11.513 16:00:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.513 16:00:56 nvmf_rdma.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:11.513 ************************************ 00:24:11.513 END TEST nvmf_fuzz 00:24:11.513 ************************************ 00:24:11.513 16:00:56 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:24:11.513 16:00:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:11.513 16:00:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.513 16:00:56 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:11.513 ************************************ 00:24:11.513 START TEST nvmf_multiconnection 00:24:11.513 ************************************ 00:24:11.513 16:00:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:24:11.513 * Looking for test storage... 00:24:11.513 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:24:11.513 16:00:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:11.513 16:00:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:24:11.513 16:00:56 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.513 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:11.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.773 --rc genhtml_branch_coverage=1 00:24:11.773 --rc genhtml_function_coverage=1 00:24:11.773 --rc genhtml_legend=1 00:24:11.773 --rc geninfo_all_blocks=1 00:24:11.773 --rc geninfo_unexecuted_blocks=1 00:24:11.773 00:24:11.773 ' 00:24:11.773 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:11.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.773 --rc genhtml_branch_coverage=1 00:24:11.773 --rc genhtml_function_coverage=1 00:24:11.774 --rc genhtml_legend=1 00:24:11.774 --rc geninfo_all_blocks=1 00:24:11.774 --rc geninfo_unexecuted_blocks=1 00:24:11.774 00:24:11.774 ' 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:11.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.774 --rc genhtml_branch_coverage=1 00:24:11.774 --rc genhtml_function_coverage=1 00:24:11.774 --rc genhtml_legend=1 00:24:11.774 --rc geninfo_all_blocks=1 00:24:11.774 --rc geninfo_unexecuted_blocks=1 00:24:11.774 00:24:11.774 ' 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:11.774 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.774 --rc genhtml_branch_coverage=1 00:24:11.774 --rc genhtml_function_coverage=1 00:24:11.774 --rc genhtml_legend=1 00:24:11.774 --rc geninfo_all_blocks=1 00:24:11.774 --rc geninfo_unexecuted_blocks=1 00:24:11.774 00:24:11.774 ' 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:11.774 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:24:11.774 16:00:57 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:19.882 
16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:19.882 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:19.882 16:01:03 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:19.882 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:19.882 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.882 16:01:03 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:19.882 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # rdma_device_init 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # uname 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@66 -- # modprobe ib_cm 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@67 -- # modprobe ib_core 00:24:19.882 16:01:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@68 -- # modprobe ib_umad 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@70 -- # modprobe iw_cm 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@530 -- # allocate_nic_ips 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # get_rdma_if_list 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:24:19.883 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:19.883 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:19.883 altname enp217s0f0np0 00:24:19.883 altname ens818f0np0 00:24:19.883 inet 192.168.100.8/24 scope global mlx_0_0 00:24:19.883 valid_lft forever preferred_lft forever 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:24:19.883 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:19.883 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:19.883 altname enp217s0f1np1 00:24:19.883 altname ens818f1np1 00:24:19.883 inet 192.168.100.9/24 scope global mlx_0_1 00:24:19.883 valid_lft forever preferred_lft forever 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # get_rdma_if_list 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_0 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@108 -- # echo mlx_0_1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@109 -- # continue 2 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # cut -d/ -f1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # awk '{print $4}' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:24:19.883 192.168.100.9' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:24:19.883 192.168.100.9' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # head -n 1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:24:19.883 192.168.100.9' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # head -n 1 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # tail -n +2 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.883 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=3391287 00:24:19.883 16:01:04 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 3391287 00:24:19.884 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 3391287 ']' 00:24:19.884 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.884 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.884 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.884 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.884 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.884 16:01:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:19.884 [2024-11-17 16:01:04.326019] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:19.884 [2024-11-17 16:01:04.326119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.884 [2024-11-17 16:01:04.459075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.884 [2024-11-17 16:01:04.571999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.884 [2024-11-17 16:01:04.572045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.884 [2024-11-17 16:01:04.572058] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.884 [2024-11-17 16:01:04.572073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.884 [2024-11-17 16:01:04.572083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
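The trace above is the usual autotest start-up handshake: nvmfappstart launches build/bin/nvmf_tgt with shared-memory id 0, tracepoint mask 0xFFFF and core mask 0xF, records its pid (3391287 here), and waitforlisten blocks until the app answers on /var/tmp/spdk.sock (max_retries=100). The lines below are only a minimal sketch of that start-and-wait pattern, not the SPDK waitforlisten helper itself; the 0.5 s poll interval is an assumption.

  # Sketch: start the target, then poll the RPC socket until it responds.
  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  rpc_sock=/var/tmp/spdk.sock
  for ((retry = 0; retry < 100; retry++)); do
      # rpc_get_methods only succeeds once the app is listening on the socket
      if "$SPDK/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.5
  done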
00:24:19.884 [2024-11-17 16:01:04.574713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.884 [2024-11-17 16:01:04.574787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.884 [2024-11-17 16:01:04.574847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.884 [2024-11-17 16:01:04.574857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.884 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.884 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:24:19.884 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:19.884 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:19.884 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.884 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.884 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:19.884 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.884 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:19.884 [2024-11-17 16:01:05.195818] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f8ac1da6940) succeed. 00:24:19.884 [2024-11-17 16:01:05.205189] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f8ac1d62940) succeed. 
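With both mlx5 IB devices registered, the multiconnection test provisions its targets. The traced rpc_cmd calls that follow (cnode1, cnode2, ...) repeat a fixed per-subsystem recipe over NVMF_SUBSYS=11 subsystems: a 64 MiB / 512 B-block malloc bdev, a subsystem, a namespace, and an RDMA listener on 192.168.100.8:4420. The loop below is a condensed, illustrative rendering of that recipe using scripts/rpc.py directly rather than the script's rpc_cmd wrapper.

  # Sketch: transport once, then one bdev + subsystem + ns + listener per index.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  for i in $(seq 1 11); do
      $rpc bdev_malloc_create 64 512 -b Malloc$i
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
          -t rdma -a 192.168.100.8 -s 4420
  done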
00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.146 Malloc1 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.146 [2024-11-17 16:01:05.586584] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.146 Malloc2 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 
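For context, the consumer side of these listeners is the nvme-cli connect path configured earlier in common.sh (NVME_CONNECT='nvme connect -i 15', NVME_HOSTNQN, NVME_HOSTID, port 4420). The actual connect invocations are not part of this excerpt, so the command below is only an illustrative reconstruction from those already-logged values, shown for a single subsystem.

  # Sketch of one host-side connection (not traced in this excerpt).
  nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
      --hostid=8013ee90-59d8-e711-906e-00163566263e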
00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.146 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.433 Malloc3 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.433 
16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.433 Malloc4 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.433 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.434 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:20.434 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.434 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.714 Malloc5 00:24:20.714 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.714 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:20.714 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.714 16:01:05 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.714 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.714 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:20.714 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.714 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.714 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.714 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:24:20.714 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.714 16:01:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.714 Malloc6 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.714 16:01:06 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.714 Malloc7 00:24:20.714 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.715 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.973 Malloc8 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.973 16:01:06 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.973 Malloc9 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.973 16:01:06 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.973 Malloc10 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.973 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.231 Malloc11 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.231 16:01:06 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:21.231 16:01:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:24:22.163 16:01:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:22.163 16:01:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:22.163 16:01:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:22.163 16:01:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:22.163 16:01:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:24.059 16:01:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:24.059 16:01:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:24.059 16:01:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:24:24.059 16:01:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:24.059 16:01:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:24.059 16:01:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:24.059 16:01:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:24.059 16:01:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:24:25.431 16:01:10 
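[Editor's note] With all eleven subsystems listening, the host side attaches them one at a time: nvme connect over RDMA using the host NQN/ID printed above, then waitforserial polls lsblk (up to 15 tries, 2 s apart) until a block device with the expected serial appears. A simplified sketch of one attach, using only values taken from the log:

# Sketch of the cnode1 attach plus a simplified wait loop; the real waitforserial
# helper in common/autotest_common.sh caps the retries as seen in the log.
nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
     -n nqn.2016-06.io.spdk:cnode1 \
     --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
     --hostid=8013ee90-59d8-e711-906e-00163566263e
until (( $(lsblk -l -o NAME,SERIAL | grep -c SPDK1) >= 1 )); do sleep 2; done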
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:25.431 16:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:25.431 16:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:25.431 16:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:25.431 16:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:27.328 16:01:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:27.328 16:01:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:27.328 16:01:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:24:27.328 16:01:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:27.328 16:01:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:27.328 16:01:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:27.328 16:01:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:27.328 16:01:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:24:28.259 16:01:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:28.259 16:01:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:28.259 16:01:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:28.259 16:01:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:28.259 16:01:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:30.155 16:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:30.155 16:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:30.155 16:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:24:30.155 16:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:30.155 16:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:30.155 16:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:30.155 16:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:30.155 16:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:24:31.088 16:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:31.088 16:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:31.088 16:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:31.088 16:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:31.088 16:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:33.613 16:01:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:33.613 16:01:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:33.613 16:01:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:24:33.613 16:01:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:33.613 16:01:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:33.613 16:01:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:33.613 16:01:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:33.613 16:01:18 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:24:34.177 16:01:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:34.177 16:01:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:34.177 16:01:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:34.177 16:01:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:34.177 16:01:19 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:36.072 16:01:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:36.072 16:01:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:36.072 16:01:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:24:36.329 16:01:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:36.329 16:01:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:36.329 16:01:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:36.329 16:01:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection 
-- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.329 16:01:21 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:24:37.261 16:01:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:37.261 16:01:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:37.261 16:01:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:37.261 16:01:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:37.261 16:01:22 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:39.153 16:01:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:39.153 16:01:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:39.153 16:01:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:24:39.153 16:01:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:39.153 16:01:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:39.153 16:01:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:39.153 16:01:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:39.153 16:01:24 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:24:40.524 16:01:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:40.524 16:01:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:40.524 16:01:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:40.524 16:01:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:40.524 16:01:25 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:42.422 16:01:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:42.422 16:01:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:42.422 16:01:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:24:42.422 16:01:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:42.422 16:01:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == 
nvme_device_counter )) 00:24:42.422 16:01:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:42.422 16:01:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.422 16:01:27 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:24:43.356 16:01:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:43.356 16:01:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:43.356 16:01:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:43.356 16:01:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:43.356 16:01:28 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:45.257 16:01:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:45.257 16:01:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:45.257 16:01:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:24:45.257 16:01:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:45.257 16:01:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:45.257 16:01:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:45.257 16:01:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:45.257 16:01:30 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:24:46.193 16:01:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:46.193 16:01:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:46.193 16:01:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:46.193 16:01:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:46.193 16:01:31 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:48.727 16:01:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:48.727 16:01:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:48.727 16:01:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:24:48.727 16:01:33 
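[Editor's note] The remaining attaches (cnode10 and cnode11) follow the same pattern, after which the script drives I/O through spdk/scripts/fio-wrapper: first a sequential-read pass, then a random-write pass, each against all eleven namespaces at once. Judging by the job file the wrapper echoes further below, its flags map to fio options as -i -> bs, -d -> iodepth, -t -> rw and -r -> runtime, with -p nvmf selecting the connected /dev/nvme*n1 devices. A roughly equivalent direct fio invocation for a single namespace would be:

# Hedged sketch only: the flag mapping is inferred from the echoed job file,
# not from fio-wrapper itself; the real run defines one job per namespace.
fio --name=job0 --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 --thread=1 --invalidate=1 --norandommap=1 \
    --rw=read --bs=262144 --iodepth=64 --time_based=1 --runtime=10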
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:48.727 16:01:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:48.727 16:01:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:48.727 16:01:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.727 16:01:33 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:24:49.295 16:01:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:49.295 16:01:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:49.295 16:01:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:49.295 16:01:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:49.295 16:01:34 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:51.199 16:01:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:51.199 16:01:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:51.199 16:01:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:24:51.199 16:01:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:51.199 16:01:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:51.199 16:01:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:51.199 16:01:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.199 16:01:36 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:24:52.645 16:01:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:52.645 16:01:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:24:52.645 16:01:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.645 16:01:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:24:52.645 16:01:37 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:24:54.546 16:01:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:24:54.546 16:01:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:24:54.546 16:01:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:24:54.546 16:01:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:24:54.546 16:01:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:24:54.546 16:01:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:24:54.546 16:01:39 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:54.546 [global] 00:24:54.546 thread=1 00:24:54.546 invalidate=1 00:24:54.546 rw=read 00:24:54.546 time_based=1 00:24:54.546 runtime=10 00:24:54.546 ioengine=libaio 00:24:54.546 direct=1 00:24:54.546 bs=262144 00:24:54.546 iodepth=64 00:24:54.546 norandommap=1 00:24:54.546 numjobs=1 00:24:54.546 00:24:54.546 [job0] 00:24:54.546 filename=/dev/nvme0n1 00:24:54.546 [job1] 00:24:54.546 filename=/dev/nvme10n1 00:24:54.546 [job2] 00:24:54.546 filename=/dev/nvme1n1 00:24:54.546 [job3] 00:24:54.546 filename=/dev/nvme2n1 00:24:54.546 [job4] 00:24:54.546 filename=/dev/nvme3n1 00:24:54.546 [job5] 00:24:54.546 filename=/dev/nvme4n1 00:24:54.546 [job6] 00:24:54.546 filename=/dev/nvme5n1 00:24:54.546 [job7] 00:24:54.546 filename=/dev/nvme6n1 00:24:54.546 [job8] 00:24:54.546 filename=/dev/nvme7n1 00:24:54.546 [job9] 00:24:54.546 filename=/dev/nvme8n1 00:24:54.546 [job10] 00:24:54.546 filename=/dev/nvme9n1 00:24:54.546 Could not set queue depth (nvme0n1) 00:24:54.546 Could not set queue depth (nvme10n1) 00:24:54.546 Could not set queue depth (nvme1n1) 00:24:54.546 Could not set queue depth (nvme2n1) 00:24:54.546 Could not set queue depth (nvme3n1) 00:24:54.546 Could not set queue depth (nvme4n1) 00:24:54.546 Could not set queue depth (nvme5n1) 00:24:54.546 Could not set queue depth (nvme6n1) 00:24:54.546 Could not set queue depth (nvme7n1) 00:24:54.546 Could not set queue depth (nvme8n1) 00:24:54.546 Could not set queue depth (nvme9n1) 00:24:54.805 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.805 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.805 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.805 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.805 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.805 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.805 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.805 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.805 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.805 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.805 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.805 fio-3.35 00:24:54.805 Starting 11 threads 00:25:07.013 00:25:07.013 job0: (groupid=0, jobs=1): err= 0: pid=3397774: Sun Nov 17 16:01:50 2024 00:25:07.013 read: IOPS=845, BW=211MiB/s (222MB/s)(2129MiB/10070msec) 00:25:07.013 slat (usec): min=18, max=48945, avg=1169.84, stdev=3324.40 00:25:07.013 clat (msec): min=9, max=176, avg=74.45, stdev=16.47 00:25:07.013 lat (msec): min=9, max=176, avg=75.62, stdev=16.95 00:25:07.013 clat percentiles (msec): 00:25:07.013 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 68], 00:25:07.013 | 30.00th=[ 70], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:25:07.013 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 94], 95.00th=[ 108], 00:25:07.013 | 99.00th=[ 115], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 171], 00:25:07.013 | 99.99th=[ 178] 00:25:07.013 bw ( KiB/s): min=139776, max=304128, per=5.89%, avg=216345.60, stdev=41144.67, samples=20 00:25:07.013 iops : min= 546, max= 1188, avg=845.10, stdev=160.72, samples=20 00:25:07.013 lat (msec) : 10=0.05%, 20=0.34%, 50=0.53%, 100=90.09%, 250=9.00% 00:25:07.013 cpu : usr=0.39%, sys=4.31%, ctx=1608, majf=0, minf=3659 00:25:07.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:07.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.013 issued rwts: total=8514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.013 job1: (groupid=0, jobs=1): err= 0: pid=3397785: Sun Nov 17 16:01:50 2024 00:25:07.013 read: IOPS=947, BW=237MiB/s (248MB/s)(2386MiB/10069msec) 00:25:07.013 slat (usec): min=12, max=46392, avg=1021.89, stdev=3078.25 00:25:07.013 clat (msec): min=12, max=174, avg=66.44, stdev=20.17 00:25:07.013 lat (msec): min=13, max=174, avg=67.46, stdev=20.64 00:25:07.013 clat percentiles (msec): 00:25:07.013 | 1.00th=[ 41], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:25:07.013 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 55], 60.00th=[ 59], 00:25:07.013 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 93], 95.00th=[ 108], 00:25:07.013 | 99.00th=[ 113], 99.50th=[ 136], 99.90th=[ 174], 99.95th=[ 176], 00:25:07.013 | 99.99th=[ 176] 00:25:07.013 bw ( KiB/s): min=137216, max=307200, per=6.60%, avg=242692.55, stdev=63283.93, samples=20 00:25:07.013 iops : min= 536, max= 1200, avg=948.00, stdev=247.19, samples=20 00:25:07.013 lat (msec) : 20=0.44%, 50=1.92%, 100=89.56%, 250=8.08% 00:25:07.013 cpu : usr=0.47%, sys=4.69%, ctx=1965, majf=0, minf=4097 00:25:07.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:07.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.013 issued rwts: total=9542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.013 job2: (groupid=0, jobs=1): err= 0: pid=3397791: Sun Nov 17 16:01:50 2024 00:25:07.013 read: IOPS=1443, BW=361MiB/s (378MB/s)(3632MiB/10066msec) 00:25:07.013 slat (usec): min=11, max=53349, avg=677.79, stdev=2297.74 00:25:07.013 clat (msec): min=12, max=149, avg=43.62, stdev=19.60 00:25:07.013 lat (msec): min=12, max=164, avg=44.30, stdev=19.99 00:25:07.013 clat percentiles (msec): 00:25:07.013 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:25:07.013 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 35], 
60.00th=[ 36], 00:25:07.013 | 70.00th=[ 38], 80.00th=[ 53], 90.00th=[ 70], 95.00th=[ 104], 00:25:07.013 | 99.00th=[ 111], 99.50th=[ 115], 99.90th=[ 142], 99.95th=[ 146], 00:25:07.013 | 99.99th=[ 150] 00:25:07.013 bw ( KiB/s): min=137728, max=475136, per=10.08%, avg=370334.40, stdev=126499.71, samples=20 00:25:07.013 iops : min= 538, max= 1856, avg=1446.60, stdev=494.15, samples=20 00:25:07.013 lat (msec) : 20=0.37%, 50=73.14%, 100=21.36%, 250=5.13% 00:25:07.013 cpu : usr=0.62%, sys=5.54%, ctx=2978, majf=0, minf=4097 00:25:07.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:07.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.013 issued rwts: total=14529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.013 job3: (groupid=0, jobs=1): err= 0: pid=3397795: Sun Nov 17 16:01:50 2024 00:25:07.013 read: IOPS=848, BW=212MiB/s (222MB/s)(2133MiB/10053msec) 00:25:07.013 slat (usec): min=17, max=27069, avg=1148.34, stdev=2935.57 00:25:07.013 clat (msec): min=13, max=158, avg=74.18, stdev=16.03 00:25:07.013 lat (msec): min=14, max=158, avg=75.32, stdev=16.46 00:25:07.013 clat percentiles (msec): 00:25:07.013 | 1.00th=[ 51], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 69], 00:25:07.013 | 30.00th=[ 70], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 73], 00:25:07.013 | 70.00th=[ 75], 80.00th=[ 88], 90.00th=[ 95], 95.00th=[ 108], 00:25:07.013 | 99.00th=[ 115], 99.50th=[ 120], 99.90th=[ 155], 99.95th=[ 155], 00:25:07.013 | 99.99th=[ 159] 00:25:07.013 bw ( KiB/s): min=148480, max=309760, per=5.90%, avg=216828.55, stdev=40151.77, samples=20 00:25:07.013 iops : min= 580, max= 1210, avg=846.95, stdev=156.84, samples=20 00:25:07.013 lat (msec) : 20=0.42%, 50=0.59%, 100=90.01%, 250=8.98% 00:25:07.013 cpu : usr=0.35%, sys=4.24%, ctx=1703, majf=0, minf=4097 00:25:07.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:07.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.013 issued rwts: total=8532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.013 job4: (groupid=0, jobs=1): err= 0: pid=3397796: Sun Nov 17 16:01:50 2024 00:25:07.013 read: IOPS=819, BW=205MiB/s (215MB/s)(2064MiB/10069msec) 00:25:07.013 slat (usec): min=12, max=30100, avg=1189.52, stdev=3226.67 00:25:07.013 clat (msec): min=13, max=158, avg=76.80, stdev=15.82 00:25:07.013 lat (msec): min=13, max=158, avg=77.99, stdev=16.30 00:25:07.013 clat percentiles (msec): 00:25:07.013 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 70], 00:25:07.013 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 75], 00:25:07.013 | 70.00th=[ 86], 80.00th=[ 89], 90.00th=[ 99], 95.00th=[ 108], 00:25:07.013 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 144], 00:25:07.013 | 99.99th=[ 159] 00:25:07.013 bw ( KiB/s): min=149504, max=309760, per=5.71%, avg=209664.00, stdev=38064.75, samples=20 00:25:07.013 iops : min= 584, max= 1210, avg=819.00, stdev=148.69, samples=20 00:25:07.013 lat (msec) : 20=0.29%, 50=1.83%, 100=88.15%, 250=9.73% 00:25:07.013 cpu : usr=0.39%, sys=4.08%, ctx=1694, majf=0, minf=4097 00:25:07.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:07.013 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.013 issued rwts: total=8254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.013 job5: (groupid=0, jobs=1): err= 0: pid=3397797: Sun Nov 17 16:01:50 2024 00:25:07.013 read: IOPS=1681, BW=420MiB/s (441MB/s)(4217MiB/10030msec) 00:25:07.013 slat (usec): min=11, max=20464, avg=589.21, stdev=1611.23 00:25:07.013 clat (usec): min=13006, max=70977, avg=37427.73, stdev=15085.93 00:25:07.013 lat (usec): min=13261, max=72623, avg=38016.94, stdev=15370.98 00:25:07.013 clat percentiles (usec): 00:25:07.013 | 1.00th=[14353], 5.00th=[15664], 10.00th=[16188], 20.00th=[16712], 00:25:07.013 | 30.00th=[33162], 40.00th=[34866], 50.00th=[35914], 60.00th=[46400], 00:25:07.013 | 70.00th=[51643], 80.00th=[52691], 90.00th=[54264], 95.00th=[55837], 00:25:07.013 | 99.00th=[59507], 99.50th=[62129], 99.90th=[68682], 99.95th=[69731], 00:25:07.013 | 99.99th=[70779] 00:25:07.013 bw ( KiB/s): min=298581, max=992256, per=11.71%, avg=430237.85, stdev=202428.26, samples=20 00:25:07.013 iops : min= 1166, max= 3876, avg=1680.60, stdev=790.75, samples=20 00:25:07.013 lat (msec) : 20=26.45%, 50=34.47%, 100=39.08% 00:25:07.013 cpu : usr=0.70%, sys=6.54%, ctx=3019, majf=0, minf=4097 00:25:07.013 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:07.013 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.014 issued rwts: total=16868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.014 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.014 job6: (groupid=0, jobs=1): err= 0: pid=3397798: Sun Nov 17 16:01:50 2024 00:25:07.014 read: IOPS=1274, BW=319MiB/s (334MB/s)(3209MiB/10068msec) 00:25:07.014 slat (usec): min=13, max=67755, avg=773.13, stdev=2477.78 00:25:07.014 clat (msec): min=12, max=193, avg=49.38, stdev=17.86 00:25:07.014 lat (msec): min=13, max=193, avg=50.15, stdev=18.24 00:25:07.014 clat percentiles (msec): 00:25:07.014 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 36], 00:25:07.014 | 30.00th=[ 36], 40.00th=[ 41], 50.00th=[ 52], 60.00th=[ 53], 00:25:07.014 | 70.00th=[ 54], 80.00th=[ 55], 90.00th=[ 58], 95.00th=[ 106], 00:25:07.014 | 99.00th=[ 112], 99.50th=[ 113], 99.90th=[ 161], 99.95th=[ 165], 00:25:07.014 | 99.99th=[ 178] 00:25:07.014 bw ( KiB/s): min=140800, max=459264, per=8.90%, avg=326912.00, stdev=91431.71, samples=20 00:25:07.014 iops : min= 550, max= 1794, avg=1277.00, stdev=357.16, samples=20 00:25:07.014 lat (msec) : 20=0.29%, 50=42.21%, 100=51.70%, 250=5.80% 00:25:07.014 cpu : usr=0.45%, sys=6.25%, ctx=2456, majf=0, minf=4097 00:25:07.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:25:07.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.014 issued rwts: total=12834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.014 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.014 job7: (groupid=0, jobs=1): err= 0: pid=3397799: Sun Nov 17 16:01:50 2024 00:25:07.014 read: IOPS=2325, BW=581MiB/s (610MB/s)(5831MiB/10029msec) 00:25:07.014 slat (usec): min=11, max=9889, avg=426.40, stdev=1025.85 00:25:07.014 clat (usec): min=2243, max=59279, avg=27065.17, stdev=8575.86 00:25:07.014 lat (usec): 
min=2408, max=59311, avg=27491.57, stdev=8733.73 00:25:07.014 clat percentiles (usec): 00:25:07.014 | 1.00th=[15270], 5.00th=[16057], 10.00th=[16450], 20.00th=[17171], 00:25:07.014 | 30.00th=[17695], 40.00th=[19006], 50.00th=[32900], 60.00th=[33424], 00:25:07.014 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[35914], 00:25:07.014 | 99.00th=[39060], 99.50th=[41157], 99.90th=[51119], 99.95th=[54789], 00:25:07.014 | 99.99th=[58983] 00:25:07.014 bw ( KiB/s): min=460288, max=949760, per=16.21%, avg=595524.05, stdev=191806.09, samples=20 00:25:07.014 iops : min= 1798, max= 3710, avg=2326.25, stdev=749.21, samples=20 00:25:07.014 lat (msec) : 4=0.06%, 10=0.22%, 20=40.19%, 50=59.40%, 100=0.14% 00:25:07.014 cpu : usr=0.53%, sys=7.19%, ctx=4596, majf=0, minf=4097 00:25:07.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:25:07.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.014 issued rwts: total=23322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.014 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.014 job8: (groupid=0, jobs=1): err= 0: pid=3397800: Sun Nov 17 16:01:50 2024 00:25:07.014 read: IOPS=1573, BW=393MiB/s (413MB/s)(3947MiB/10032msec) 00:25:07.014 slat (usec): min=12, max=29288, avg=617.96, stdev=1751.44 00:25:07.014 clat (msec): min=12, max=114, avg=40.01, stdev=13.27 00:25:07.014 lat (msec): min=12, max=116, avg=40.62, stdev=13.55 00:25:07.014 clat percentiles (msec): 00:25:07.014 | 1.00th=[ 30], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:25:07.014 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 36], 00:25:07.014 | 70.00th=[ 37], 80.00th=[ 41], 90.00th=[ 54], 95.00th=[ 84], 00:25:07.014 | 99.00th=[ 89], 99.50th=[ 91], 99.90th=[ 108], 99.95th=[ 112], 00:25:07.014 | 99.99th=[ 115] 00:25:07.014 bw ( KiB/s): min=180736, max=470528, per=10.96%, avg=402564.90, stdev=96264.38, samples=20 00:25:07.014 iops : min= 706, max= 1838, avg=1572.50, stdev=376.05, samples=20 00:25:07.014 lat (msec) : 20=0.43%, 50=82.93%, 100=16.51%, 250=0.13% 00:25:07.014 cpu : usr=0.58%, sys=6.81%, ctx=2993, majf=0, minf=4097 00:25:07.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:07.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.014 issued rwts: total=15787,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.014 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.014 job9: (groupid=0, jobs=1): err= 0: pid=3397801: Sun Nov 17 16:01:50 2024 00:25:07.014 read: IOPS=1599, BW=400MiB/s (419MB/s)(4010MiB/10024msec) 00:25:07.014 slat (usec): min=12, max=24087, avg=619.84, stdev=1672.65 00:25:07.014 clat (msec): min=12, max=107, avg=39.34, stdev=13.42 00:25:07.014 lat (msec): min=12, max=111, avg=39.96, stdev=13.69 00:25:07.014 clat percentiles (msec): 00:25:07.014 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:25:07.014 | 30.00th=[ 35], 40.00th=[ 35], 50.00th=[ 35], 60.00th=[ 36], 00:25:07.014 | 70.00th=[ 37], 80.00th=[ 40], 90.00th=[ 53], 95.00th=[ 84], 00:25:07.014 | 99.00th=[ 89], 99.50th=[ 91], 99.90th=[ 104], 99.95th=[ 105], 00:25:07.014 | 99.99th=[ 108] 00:25:07.014 bw ( KiB/s): min=182784, max=551424, per=11.13%, avg=408990.35, stdev=101748.69, samples=20 00:25:07.014 iops : min= 714, max= 2154, avg=1597.60, stdev=397.48, samples=20 
00:25:07.014 lat (msec) : 20=2.15%, 50=81.70%, 100=16.02%, 250=0.12% 00:25:07.014 cpu : usr=0.61%, sys=6.70%, ctx=2885, majf=0, minf=4097 00:25:07.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:25:07.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.014 issued rwts: total=16038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.014 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.014 job10: (groupid=0, jobs=1): err= 0: pid=3397802: Sun Nov 17 16:01:50 2024 00:25:07.014 read: IOPS=1029, BW=257MiB/s (270MB/s)(2581MiB/10030msec) 00:25:07.014 slat (usec): min=10, max=29730, avg=938.53, stdev=2620.33 00:25:07.014 clat (msec): min=14, max=114, avg=61.17, stdev=20.93 00:25:07.014 lat (msec): min=14, max=120, avg=62.11, stdev=21.40 00:25:07.014 clat percentiles (msec): 00:25:07.014 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 35], 00:25:07.014 | 30.00th=[ 38], 40.00th=[ 60], 50.00th=[ 71], 60.00th=[ 71], 00:25:07.014 | 70.00th=[ 73], 80.00th=[ 78], 90.00th=[ 88], 95.00th=[ 90], 00:25:07.014 | 99.00th=[ 94], 99.50th=[ 95], 99.90th=[ 101], 99.95th=[ 106], 00:25:07.014 | 99.99th=[ 115] 00:25:07.014 bw ( KiB/s): min=174080, max=482304, per=7.15%, avg=262707.20, stdev=101656.50, samples=20 00:25:07.014 iops : min= 680, max= 1884, avg=1026.20, stdev=397.10, samples=20 00:25:07.014 lat (msec) : 20=0.43%, 50=31.97%, 100=67.48%, 250=0.13% 00:25:07.014 cpu : usr=0.50%, sys=4.52%, ctx=2208, majf=0, minf=4097 00:25:07.014 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:07.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:07.014 issued rwts: total=10325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.014 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:07.014 00:25:07.014 Run status group 0 (all jobs): 00:25:07.014 READ: bw=3589MiB/s (3763MB/s), 205MiB/s-581MiB/s (215MB/s-610MB/s), io=35.3GiB (37.9GB), run=10024-10070msec 00:25:07.014 00:25:07.014 Disk stats (read/write): 00:25:07.014 nvme0n1: ios=16781/0, merge=0/0, ticks=1219450/0, in_queue=1219450, util=96.85% 00:25:07.014 nvme10n1: ios=18833/0, merge=0/0, ticks=1219152/0, in_queue=1219152, util=97.09% 00:25:07.014 nvme1n1: ios=28805/0, merge=0/0, ticks=1216890/0, in_queue=1216890, util=97.40% 00:25:07.014 nvme2n1: ios=16745/0, merge=0/0, ticks=1218564/0, in_queue=1218564, util=97.61% 00:25:07.014 nvme3n1: ios=16254/0, merge=0/0, ticks=1221479/0, in_queue=1221479, util=97.72% 00:25:07.014 nvme4n1: ios=33207/0, merge=0/0, ticks=1222854/0, in_queue=1222854, util=98.09% 00:25:07.014 nvme5n1: ios=25422/0, merge=0/0, ticks=1219847/0, in_queue=1219847, util=98.28% 00:25:07.014 nvme6n1: ios=46131/0, merge=0/0, ticks=1215943/0, in_queue=1215943, util=98.41% 00:25:07.014 nvme7n1: ios=31070/0, merge=0/0, ticks=1222135/0, in_queue=1222135, util=98.89% 00:25:07.014 nvme8n1: ios=31224/0, merge=0/0, ticks=1223791/0, in_queue=1223791, util=99.12% 00:25:07.014 nvme9n1: ios=20122/0, merge=0/0, ticks=1224718/0, in_queue=1224718, util=99.25% 00:25:07.015 16:01:50 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:07.015 [global] 00:25:07.015 thread=1 00:25:07.015 invalidate=1 00:25:07.015 
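For reference, the fio-wrapper call above maps onto a plain fio job file roughly like the sketch below. This is a reconstruction from the arguments shown (-p nvmf -i 262144 -d 64 -t randwrite -r 10) and from the [global]/[jobN] stanzas echoed in this log; the real scripts/fio-wrapper generates its own job file, so details may differ.

# Hedged sketch: approximate standalone equivalent of the fio-wrapper invocation above.
# All option values are taken from the [global] section and job list echoed in this log;
# the eleven /dev/nvme*n1 filenames are the namespaces listed further down.
cat > /tmp/multiconnection-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=262144
iodepth=64
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme10n1
EOF
# ...one [jobN] stanza per remaining namespace (/dev/nvme1n1 .. /dev/nvme9n1), as in the log.
fio /tmp/multiconnection-write.fio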
rw=randwrite 00:25:07.015 time_based=1 00:25:07.015 runtime=10 00:25:07.015 ioengine=libaio 00:25:07.015 direct=1 00:25:07.015 bs=262144 00:25:07.015 iodepth=64 00:25:07.015 norandommap=1 00:25:07.015 numjobs=1 00:25:07.015 00:25:07.015 [job0] 00:25:07.015 filename=/dev/nvme0n1 00:25:07.015 [job1] 00:25:07.015 filename=/dev/nvme10n1 00:25:07.015 [job2] 00:25:07.015 filename=/dev/nvme1n1 00:25:07.015 [job3] 00:25:07.015 filename=/dev/nvme2n1 00:25:07.015 [job4] 00:25:07.015 filename=/dev/nvme3n1 00:25:07.015 [job5] 00:25:07.015 filename=/dev/nvme4n1 00:25:07.015 [job6] 00:25:07.015 filename=/dev/nvme5n1 00:25:07.015 [job7] 00:25:07.015 filename=/dev/nvme6n1 00:25:07.015 [job8] 00:25:07.015 filename=/dev/nvme7n1 00:25:07.015 [job9] 00:25:07.015 filename=/dev/nvme8n1 00:25:07.015 [job10] 00:25:07.015 filename=/dev/nvme9n1 00:25:07.015 Could not set queue depth (nvme0n1) 00:25:07.015 Could not set queue depth (nvme10n1) 00:25:07.015 Could not set queue depth (nvme1n1) 00:25:07.015 Could not set queue depth (nvme2n1) 00:25:07.015 Could not set queue depth (nvme3n1) 00:25:07.015 Could not set queue depth (nvme4n1) 00:25:07.015 Could not set queue depth (nvme5n1) 00:25:07.015 Could not set queue depth (nvme6n1) 00:25:07.015 Could not set queue depth (nvme7n1) 00:25:07.015 Could not set queue depth (nvme8n1) 00:25:07.015 Could not set queue depth (nvme9n1) 00:25:07.015 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:07.015 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:07.015 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:07.015 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:07.015 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:07.015 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:07.015 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:07.015 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:07.015 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:07.015 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:07.015 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:07.015 fio-3.35 00:25:07.015 Starting 11 threads 00:25:16.995 00:25:16.995 job0: (groupid=0, jobs=1): err= 0: pid=3399493: Sun Nov 17 16:02:01 2024 00:25:16.995 write: IOPS=1059, BW=265MiB/s (278MB/s)(2660MiB/10043msec); 0 zone resets 00:25:16.995 slat (usec): min=23, max=10612, avg=912.02, stdev=1670.04 00:25:16.995 clat (usec): min=12347, max=99713, avg=59470.19, stdev=9209.20 00:25:16.995 lat (usec): min=12386, max=99761, avg=60382.20, stdev=9342.27 00:25:16.995 clat percentiles (msec): 00:25:16.995 | 1.00th=[ 37], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 58], 00:25:16.995 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 62], 00:25:16.995 | 70.00th=[ 62], 80.00th=[ 63], 90.00th=[ 65], 95.00th=[ 79], 00:25:16.995 | 99.00th=[ 85], 99.50th=[ 87], 
99.90th=[ 94], 99.95th=[ 96], 00:25:16.995 | 99.99th=[ 101] 00:25:16.995 bw ( KiB/s): min=198656, max=371200, per=8.35%, avg=270796.80, stdev=35133.52, samples=20 00:25:16.995 iops : min= 776, max= 1450, avg=1057.80, stdev=137.24, samples=20 00:25:16.995 lat (msec) : 20=0.22%, 50=11.80%, 100=87.98% 00:25:16.995 cpu : usr=2.60%, sys=4.19%, ctx=2661, majf=0, minf=1 00:25:16.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:16.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.995 issued rwts: total=0,10641,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.995 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.995 job1: (groupid=0, jobs=1): err= 0: pid=3399523: Sun Nov 17 16:02:01 2024 00:25:16.995 write: IOPS=1003, BW=251MiB/s (263MB/s)(2522MiB/10055msec); 0 zone resets 00:25:16.995 slat (usec): min=23, max=14676, avg=987.34, stdev=1950.72 00:25:16.995 clat (msec): min=12, max=135, avg=62.78, stdev= 8.60 00:25:16.995 lat (msec): min=12, max=135, avg=63.77, stdev= 8.80 00:25:16.995 clat percentiles (msec): 00:25:16.995 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 57], 20.00th=[ 58], 00:25:16.995 | 30.00th=[ 59], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:25:16.995 | 70.00th=[ 62], 80.00th=[ 65], 90.00th=[ 79], 95.00th=[ 81], 00:25:16.995 | 99.00th=[ 87], 99.50th=[ 91], 99.90th=[ 128], 99.95th=[ 131], 00:25:16.995 | 99.99th=[ 136] 00:25:16.995 bw ( KiB/s): min=195584, max=276480, per=7.92%, avg=256640.00, stdev=27726.17, samples=20 00:25:16.995 iops : min= 764, max= 1080, avg=1002.50, stdev=108.31, samples=20 00:25:16.995 lat (msec) : 20=0.12%, 50=0.37%, 100=99.20%, 250=0.32% 00:25:16.995 cpu : usr=2.27%, sys=3.91%, ctx=2263, majf=0, minf=1 00:25:16.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:16.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.995 issued rwts: total=0,10088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.995 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.995 job2: (groupid=0, jobs=1): err= 0: pid=3399537: Sun Nov 17 16:02:01 2024 00:25:16.995 write: IOPS=1044, BW=261MiB/s (274MB/s)(2627MiB/10059msec); 0 zone resets 00:25:16.995 slat (usec): min=24, max=27357, avg=928.55, stdev=1871.50 00:25:16.995 clat (msec): min=8, max=133, avg=60.33, stdev=11.25 00:25:16.995 lat (msec): min=8, max=133, avg=61.25, stdev=11.46 00:25:16.995 clat percentiles (msec): 00:25:16.995 | 1.00th=[ 38], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 57], 00:25:16.995 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:25:16.995 | 70.00th=[ 62], 80.00th=[ 64], 90.00th=[ 79], 95.00th=[ 81], 00:25:16.995 | 99.00th=[ 86], 99.50th=[ 92], 99.90th=[ 126], 99.95th=[ 127], 00:25:16.995 | 99.99th=[ 133] 00:25:16.995 bw ( KiB/s): min=197120, max=391680, per=8.25%, avg=267340.80, stdev=43514.86, samples=20 00:25:16.995 iops : min= 770, max= 1530, avg=1044.30, stdev=169.98, samples=20 00:25:16.995 lat (msec) : 10=0.07%, 20=0.14%, 50=12.36%, 100=87.18%, 250=0.25% 00:25:16.995 cpu : usr=2.18%, sys=4.19%, ctx=2571, majf=0, minf=1 00:25:16.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:16.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:25:16.995 issued rwts: total=0,10506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.995 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.995 job3: (groupid=0, jobs=1): err= 0: pid=3399542: Sun Nov 17 16:02:01 2024 00:25:16.995 write: IOPS=2356, BW=589MiB/s (618MB/s)(5927MiB/10059msec); 0 zone resets 00:25:16.995 slat (usec): min=15, max=17507, avg=410.45, stdev=941.40 00:25:16.995 clat (msec): min=8, max=133, avg=26.73, stdev=14.76 00:25:16.995 lat (msec): min=8, max=133, avg=27.14, stdev=14.98 00:25:16.995 clat percentiles (msec): 00:25:16.995 | 1.00th=[ 11], 5.00th=[ 19], 10.00th=[ 20], 20.00th=[ 20], 00:25:16.995 | 30.00th=[ 20], 40.00th=[ 21], 50.00th=[ 21], 60.00th=[ 21], 00:25:16.995 | 70.00th=[ 22], 80.00th=[ 39], 90.00th=[ 43], 95.00th=[ 60], 00:25:16.995 | 99.00th=[ 81], 99.50th=[ 83], 99.90th=[ 102], 99.95th=[ 122], 00:25:16.995 | 99.99th=[ 134] 00:25:16.995 bw ( KiB/s): min=199680, max=829952, per=18.67%, avg=605337.60, stdev=230116.11, samples=20 00:25:16.995 iops : min= 780, max= 3242, avg=2364.60, stdev=898.89, samples=20 00:25:16.995 lat (msec) : 10=0.91%, 20=36.00%, 50=54.90%, 100=8.08%, 250=0.11% 00:25:16.995 cpu : usr=3.43%, sys=5.58%, ctx=4833, majf=0, minf=1 00:25:16.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:25:16.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.995 issued rwts: total=0,23709,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.996 job4: (groupid=0, jobs=1): err= 0: pid=3399546: Sun Nov 17 16:02:01 2024 00:25:16.996 write: IOPS=1018, BW=255MiB/s (267MB/s)(2554MiB/10033msec); 0 zone resets 00:25:16.996 slat (usec): min=23, max=25662, avg=961.90, stdev=2124.90 00:25:16.996 clat (msec): min=11, max=101, avg=61.86, stdev=16.13 00:25:16.996 lat (msec): min=11, max=107, avg=62.82, stdev=16.45 00:25:16.996 clat percentiles (msec): 00:25:16.996 | 1.00th=[ 35], 5.00th=[ 38], 10.00th=[ 40], 20.00th=[ 41], 00:25:16.996 | 30.00th=[ 56], 40.00th=[ 58], 50.00th=[ 61], 60.00th=[ 74], 00:25:16.996 | 70.00th=[ 77], 80.00th=[ 78], 90.00th=[ 80], 95.00th=[ 82], 00:25:16.996 | 99.00th=[ 87], 99.50th=[ 90], 99.90th=[ 100], 99.95th=[ 101], 00:25:16.996 | 99.99th=[ 102] 00:25:16.996 bw ( KiB/s): min=203264, max=412672, per=8.02%, avg=259942.40, stdev=69312.08, samples=20 00:25:16.996 iops : min= 794, max= 1612, avg=1015.40, stdev=270.75, samples=20 00:25:16.996 lat (msec) : 20=0.21%, 50=26.44%, 100=73.33%, 250=0.03% 00:25:16.996 cpu : usr=2.31%, sys=4.27%, ctx=2409, majf=0, minf=1 00:25:16.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:16.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.996 issued rwts: total=0,10217,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.996 job5: (groupid=0, jobs=1): err= 0: pid=3399547: Sun Nov 17 16:02:01 2024 00:25:16.996 write: IOPS=1070, BW=268MiB/s (281MB/s)(2690MiB/10050msec); 0 zone resets 00:25:16.996 slat (usec): min=23, max=8849, avg=925.95, stdev=1698.43 00:25:16.996 clat (msec): min=3, max=109, avg=58.84, stdev=10.16 00:25:16.996 lat (msec): min=3, max=109, avg=59.76, stdev=10.29 00:25:16.996 clat percentiles (msec): 00:25:16.996 | 1.00th=[ 38], 5.00th=[ 41], 
10.00th=[ 42], 20.00th=[ 58], 00:25:16.996 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 62], 00:25:16.996 | 70.00th=[ 63], 80.00th=[ 64], 90.00th=[ 65], 95.00th=[ 78], 00:25:16.996 | 99.00th=[ 84], 99.50th=[ 85], 99.90th=[ 99], 99.95th=[ 104], 00:25:16.996 | 99.99th=[ 110] 00:25:16.996 bw ( KiB/s): min=199168, max=397312, per=8.45%, avg=273843.20, stdev=40533.79, samples=20 00:25:16.996 iops : min= 778, max= 1552, avg=1069.70, stdev=158.34, samples=20 00:25:16.996 lat (msec) : 4=0.05%, 10=0.25%, 20=0.20%, 50=14.79%, 100=84.63% 00:25:16.996 lat (msec) : 250=0.08% 00:25:16.996 cpu : usr=2.15%, sys=3.41%, ctx=2534, majf=0, minf=1 00:25:16.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:16.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.996 issued rwts: total=0,10760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.996 job6: (groupid=0, jobs=1): err= 0: pid=3399548: Sun Nov 17 16:02:01 2024 00:25:16.996 write: IOPS=1004, BW=251MiB/s (263MB/s)(2527MiB/10058msec); 0 zone resets 00:25:16.996 slat (usec): min=23, max=13775, avg=984.84, stdev=1898.12 00:25:16.996 clat (msec): min=4, max=137, avg=62.69, stdev= 8.62 00:25:16.996 lat (msec): min=4, max=140, avg=63.67, stdev= 8.81 00:25:16.996 clat percentiles (msec): 00:25:16.996 | 1.00th=[ 55], 5.00th=[ 57], 10.00th=[ 57], 20.00th=[ 58], 00:25:16.996 | 30.00th=[ 59], 40.00th=[ 59], 50.00th=[ 60], 60.00th=[ 61], 00:25:16.996 | 70.00th=[ 62], 80.00th=[ 65], 90.00th=[ 79], 95.00th=[ 81], 00:25:16.996 | 99.00th=[ 86], 99.50th=[ 90], 99.90th=[ 127], 99.95th=[ 132], 00:25:16.996 | 99.99th=[ 138] 00:25:16.996 bw ( KiB/s): min=198656, max=276992, per=7.93%, avg=257100.80, stdev=27682.05, samples=20 00:25:16.996 iops : min= 776, max= 1082, avg=1004.30, stdev=108.13, samples=20 00:25:16.996 lat (msec) : 10=0.02%, 20=0.11%, 50=0.37%, 100=99.17%, 250=0.34% 00:25:16.996 cpu : usr=2.41%, sys=4.11%, ctx=2357, majf=0, minf=1 00:25:16.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:16.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.996 issued rwts: total=0,10106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.996 job7: (groupid=0, jobs=1): err= 0: pid=3399549: Sun Nov 17 16:02:01 2024 00:25:16.996 write: IOPS=926, BW=232MiB/s (243MB/s)(2325MiB/10034msec); 0 zone resets 00:25:16.996 slat (usec): min=25, max=28828, avg=1052.24, stdev=2264.86 00:25:16.996 clat (msec): min=12, max=106, avg=67.98, stdev=15.38 00:25:16.996 lat (msec): min=12, max=108, avg=69.03, stdev=15.67 00:25:16.996 clat percentiles (msec): 00:25:16.996 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 56], 00:25:16.996 | 30.00th=[ 63], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 78], 00:25:16.996 | 70.00th=[ 79], 80.00th=[ 80], 90.00th=[ 83], 95.00th=[ 84], 00:25:16.996 | 99.00th=[ 88], 99.50th=[ 90], 99.90th=[ 104], 99.95th=[ 105], 00:25:16.996 | 99.99th=[ 107] 00:25:16.996 bw ( KiB/s): min=197120, max=406016, per=7.29%, avg=236467.20, stdev=58506.42, samples=20 00:25:16.996 iops : min= 770, max= 1586, avg=923.70, stdev=228.54, samples=20 00:25:16.996 lat (msec) : 20=0.12%, 50=18.83%, 100=80.86%, 250=0.19% 00:25:16.996 cpu : usr=1.95%, 
sys=4.06%, ctx=2296, majf=0, minf=1 00:25:16.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:25:16.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.996 issued rwts: total=0,9300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.996 job8: (groupid=0, jobs=1): err= 0: pid=3399550: Sun Nov 17 16:02:01 2024 00:25:16.996 write: IOPS=1010, BW=253MiB/s (265MB/s)(2534MiB/10033msec); 0 zone resets 00:25:16.996 slat (usec): min=23, max=14968, avg=959.12, stdev=2029.86 00:25:16.996 clat (usec): min=4774, max=96320, avg=62361.59, stdev=15645.82 00:25:16.996 lat (usec): min=4830, max=96366, avg=63320.71, stdev=15941.34 00:25:16.996 clat percentiles (usec): 00:25:16.996 | 1.00th=[35914], 5.00th=[38011], 10.00th=[39060], 20.00th=[41157], 00:25:16.996 | 30.00th=[56361], 40.00th=[58459], 50.00th=[61080], 60.00th=[73925], 00:25:16.996 | 70.00th=[76022], 80.00th=[77071], 90.00th=[79168], 95.00th=[81265], 00:25:16.996 | 99.00th=[85459], 99.50th=[86508], 99.90th=[89654], 99.95th=[90702], 00:25:16.996 | 99.99th=[90702] 00:25:16.996 bw ( KiB/s): min=200704, max=413184, per=7.96%, avg=257894.40, stdev=68014.83, samples=20 00:25:16.996 iops : min= 784, max= 1614, avg=1007.40, stdev=265.68, samples=20 00:25:16.996 lat (msec) : 10=0.01%, 20=0.23%, 50=24.33%, 100=75.44% 00:25:16.996 cpu : usr=2.42%, sys=3.93%, ctx=2461, majf=0, minf=1 00:25:16.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:25:16.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.996 issued rwts: total=0,10137,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.996 job9: (groupid=0, jobs=1): err= 0: pid=3399552: Sun Nov 17 16:02:01 2024 00:25:16.996 write: IOPS=1114, BW=279MiB/s (292MB/s)(2797MiB/10034msec); 0 zone resets 00:25:16.996 slat (usec): min=18, max=16663, avg=889.89, stdev=1923.43 00:25:16.996 clat (usec): min=12273, max=97067, avg=56494.43, stdev=21305.82 00:25:16.996 lat (msec): min=12, max=100, avg=57.38, stdev=21.66 00:25:16.996 clat percentiles (usec): 00:25:16.996 | 1.00th=[18744], 5.00th=[19530], 10.00th=[20579], 20.00th=[39060], 00:25:16.996 | 30.00th=[40633], 40.00th=[55837], 50.00th=[58983], 60.00th=[72877], 00:25:16.996 | 70.00th=[76022], 80.00th=[77071], 90.00th=[79168], 95.00th=[81265], 00:25:16.996 | 99.00th=[85459], 99.50th=[87557], 99.90th=[89654], 99.95th=[93848], 00:25:16.996 | 99.99th=[96994] 00:25:16.996 bw ( KiB/s): min=199680, max=786432, per=8.78%, avg=284774.40, stdev=136345.66, samples=20 00:25:16.996 iops : min= 780, max= 3072, avg=1112.40, stdev=532.60, samples=20 00:25:16.996 lat (msec) : 20=6.72%, 50=30.65%, 100=62.63% 00:25:16.996 cpu : usr=2.34%, sys=4.21%, ctx=2629, majf=0, minf=1 00:25:16.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:16.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.996 issued rwts: total=0,11187,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.996 job10: (groupid=0, jobs=1): err= 0: pid=3399555: Sun Nov 17 16:02:01 2024 00:25:16.996 
write: IOPS=1067, BW=267MiB/s (280MB/s)(2681MiB/10043msec); 0 zone resets 00:25:16.996 slat (usec): min=27, max=11767, avg=926.76, stdev=1625.58 00:25:16.996 clat (msec): min=16, max=102, avg=58.99, stdev= 9.46 00:25:16.996 lat (msec): min=16, max=102, avg=59.92, stdev= 9.57 00:25:16.996 clat percentiles (msec): 00:25:16.996 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 58], 00:25:16.996 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 62], 00:25:16.996 | 70.00th=[ 63], 80.00th=[ 63], 90.00th=[ 65], 95.00th=[ 78], 00:25:16.996 | 99.00th=[ 84], 99.50th=[ 86], 99.90th=[ 92], 99.95th=[ 96], 00:25:16.996 | 99.99th=[ 101] 00:25:16.996 bw ( KiB/s): min=199168, max=398848, per=8.42%, avg=272896.00, stdev=40752.32, samples=20 00:25:16.996 iops : min= 778, max= 1558, avg=1066.00, stdev=159.19, samples=20 00:25:16.996 lat (msec) : 20=0.07%, 50=14.89%, 100=85.02%, 250=0.01% 00:25:16.996 cpu : usr=2.70%, sys=4.93%, ctx=2649, majf=0, minf=1 00:25:16.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:25:16.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.996 issued rwts: total=0,10723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.996 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.996 00:25:16.996 Run status group 0 (all jobs): 00:25:16.996 WRITE: bw=3166MiB/s (3319MB/s), 232MiB/s-589MiB/s (243MB/s-618MB/s), io=31.1GiB (33.4GB), run=10033-10059msec 00:25:16.996 00:25:16.996 Disk stats (read/write): 00:25:16.996 nvme0n1: ios=49/20890, merge=0/0, ticks=18/1217811, in_queue=1217829, util=96.65% 00:25:16.997 nvme10n1: ios=0/19854, merge=0/0, ticks=0/1215291, in_queue=1215291, util=96.83% 00:25:16.997 nvme1n1: ios=0/20687, merge=0/0, ticks=0/1215879, in_queue=1215879, util=97.17% 00:25:16.997 nvme2n1: ios=0/47076, merge=0/0, ticks=0/1223439, in_queue=1223439, util=97.37% 00:25:16.997 nvme3n1: ios=0/19892, merge=0/0, ticks=0/1218758, in_queue=1218758, util=97.45% 00:25:16.997 nvme4n1: ios=0/21161, merge=0/0, ticks=0/1217678, in_queue=1217678, util=97.99% 00:25:16.997 nvme5n1: ios=0/19895, merge=0/0, ticks=0/1217240, in_queue=1217240, util=98.05% 00:25:16.997 nvme6n1: ios=0/18072, merge=0/0, ticks=0/1217439, in_queue=1217439, util=98.18% 00:25:16.997 nvme7n1: ios=0/19737, merge=0/0, ticks=0/1219872, in_queue=1219872, util=98.62% 00:25:16.997 nvme8n1: ios=0/21841, merge=0/0, ticks=0/1218478, in_queue=1218478, util=98.85% 00:25:16.997 nvme9n1: ios=0/21057, merge=0/0, ticks=0/1216286, in_queue=1216286, util=99.02% 00:25:16.997 16:02:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:16.997 16:02:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:16.997 16:02:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.997 16:02:01 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:17.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.564 16:02:02 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:18.500 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:18.500 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:18.500 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:18.500 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:18.500 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:25:18.500 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:18.500 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:25:18.500 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:18.500 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:18.500 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.500 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.501 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.501 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:18.501 16:02:03 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:19.436 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:19.436 16:02:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:19.436 16:02:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:19.436 16:02:04 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:19.436 16:02:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:25:19.436 16:02:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:25:19.436 16:02:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:19.436 16:02:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:19.436 16:02:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:19.436 16:02:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.436 16:02:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:19.436 16:02:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.436 16:02:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:19.436 16:02:04 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:20.373 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:20.373 16:02:05 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:21.750 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:21.750 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:21.750 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local 
i=0 00:25:21.750 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:21.751 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:25:21.751 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:21.751 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:25:21.751 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:21.751 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:21.751 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.751 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.751 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.751 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.751 16:02:06 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:22.686 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:22.686 16:02:07 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:23.622 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # local i=0 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:23.622 16:02:08 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:24.559 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:24.559 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:24.559 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:24.559 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:24.559 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:25:24.559 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:24.559 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:25:24.559 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:24.559 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:24.559 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.559 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:24.559 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.559 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:24.560 16:02:09 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:25.495 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:25.495 16:02:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:25.495 16:02:10 
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:25.495 16:02:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:25.495 16:02:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:25:25.495 16:02:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:25:25.495 16:02:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:25.495 16:02:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:25.496 16:02:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:25.496 16:02:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.496 16:02:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:25.496 16:02:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.496 16:02:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.496 16:02:10 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:26.433 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:26.433 16:02:11 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:27.367 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK11 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:27.367 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:25:27.627 rmmod nvme_rdma 00:25:27.627 rmmod nvme_fabrics 00:25:27.627 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:27.627 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:25:27.627 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:25:27.627 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 3391287 ']' 00:25:27.627 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 3391287 00:25:27.627 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 3391287 ']' 00:25:27.627 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 3391287 00:25:27.627 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:25:27.627 16:02:12 
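The per-subsystem teardown traced above (multiconnection.sh steps @37-@40) repeats the same pattern for cnode1 through cnode11. A rough bash reconstruction is sketched below; the loop body commands are taken directly from the trace, while the internals of waitforserial_disconnect are inferred only from the lsblk/grep lines it emits, so the real helper in autotest_common.sh may differ in detail.

# Hedged reconstruction of the traced teardown loop.
# rpc_cmd is the SPDK autotest JSON-RPC wrapper seen in the trace, not a standalone tool.
waitforserial_disconnect() {
    local serial=$1 i=0
    # Poll until no block device reports this NVMe serial any more (trace shows both forms).
    while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        ((++i > 15)) && return 1   # timeout guard; the limit here is an assumption
        sleep 1
    done
    return 0
}

for i in $(seq 1 "$NVMF_SUBSYS"); do                              # NVMF_SUBSYS=11 in this run
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"              # drop the host-side connection
    waitforserial_disconnect "SPDK$i"                             # wait until the namespace disappears
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"   # remove the subsystem on the target
done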
nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.627 16:02:12 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3391287 00:25:27.627 16:02:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:27.627 16:02:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:27.627 16:02:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3391287' 00:25:27.627 killing process with pid 3391287 00:25:27.627 16:02:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 3391287 00:25:27.627 16:02:13 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 3391287 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:25:31.816 00:25:31.816 real 1m19.745s 00:25:31.816 user 5m7.918s 00:25:31.816 sys 0m20.257s 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.816 ************************************ 00:25:31.816 END TEST nvmf_multiconnection 00:25:31.816 ************************************ 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:31.816 ************************************ 00:25:31.816 START TEST nvmf_initiator_timeout 00:25:31.816 ************************************ 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:25:31.816 * Looking for test storage... 
00:25:31.816 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:25:31.816 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:31.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.817 --rc genhtml_branch_coverage=1 00:25:31.817 --rc genhtml_function_coverage=1 00:25:31.817 --rc genhtml_legend=1 00:25:31.817 --rc geninfo_all_blocks=1 00:25:31.817 --rc geninfo_unexecuted_blocks=1 00:25:31.817 00:25:31.817 ' 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:31.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.817 --rc genhtml_branch_coverage=1 00:25:31.817 --rc genhtml_function_coverage=1 00:25:31.817 --rc genhtml_legend=1 00:25:31.817 --rc geninfo_all_blocks=1 00:25:31.817 --rc geninfo_unexecuted_blocks=1 00:25:31.817 00:25:31.817 ' 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:31.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.817 --rc genhtml_branch_coverage=1 00:25:31.817 --rc genhtml_function_coverage=1 00:25:31.817 --rc genhtml_legend=1 00:25:31.817 --rc geninfo_all_blocks=1 00:25:31.817 --rc geninfo_unexecuted_blocks=1 00:25:31.817 00:25:31.817 ' 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:31.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.817 --rc genhtml_branch_coverage=1 00:25:31.817 --rc genhtml_function_coverage=1 00:25:31.817 --rc genhtml_legend=1 00:25:31.817 --rc geninfo_all_blocks=1 00:25:31.817 --rc geninfo_unexecuted_blocks=1 00:25:31.817 00:25:31.817 ' 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.817 16:02:16 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:31.817 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:25:31.817 16:02:16 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:38.427 16:02:23 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:38.427 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:38.427 
16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:38.427 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:38.427 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 
-- # (( 1 == 0 )) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:38.427 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:38.427 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # rdma_device_init 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # uname 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@66 -- # modprobe ib_cm 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@67 -- # modprobe ib_core 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@68 -- # modprobe ib_umad 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@70 -- # modprobe iw_cm 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@530 -- # allocate_nic_ips 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # get_rdma_if_list 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:38.428 16:02:23 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:25:38.428 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:38.428 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:38.428 altname enp217s0f0np0 00:25:38.428 altname ens818f0np0 00:25:38.428 inet 192.168.100.8/24 scope global mlx_0_0 00:25:38.428 valid_lft forever preferred_lft forever 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:38.428 16:02:23 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:25:38.428 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:38.428 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:38.428 altname enp217s0f1np1 00:25:38.428 altname ens818f1np1 00:25:38.428 inet 192.168.100.9/24 scope global mlx_0_1 00:25:38.428 valid_lft forever preferred_lft forever 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # get_rdma_if_list 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_0 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 -- # continue 2 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@108 -- # echo mlx_0_1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@109 
-- # continue 2 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # awk '{print $4}' 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # cut -d/ -f1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:25:38.428 192.168.100.9' 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:25:38.428 192.168.100.9' 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # head -n 1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:25:38.428 192.168.100.9' 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # tail -n +2 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # head -n 1 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:25:38.428 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=3406709 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 3406709 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 3406709 ']' 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:38.429 16:02:23 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:38.429 [2024-11-17 16:02:23.647531] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:38.429 [2024-11-17 16:02:23.647648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.429 [2024-11-17 16:02:23.781690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:38.429 [2024-11-17 16:02:23.878742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.429 [2024-11-17 16:02:23.878790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.429 [2024-11-17 16:02:23.878802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:38.429 [2024-11-17 16:02:23.878815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:38.429 [2024-11-17 16:02:23.878825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
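At this point nvmfappstart has launched the target: build/bin/nvmf_tgt comes up with core mask 0xF and shared-memory id 0, its PID is captured as nvmfpid (3406709 here), and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A minimal stand-alone sketch of that launch-and-wait pattern follows; the polling loop and the rpc.py/rpc_get_methods probe are illustrative assumptions, not an excerpt of the waitforlisten helper itself.

# Sketch only: start an SPDK nvmf target and wait for its RPC socket to answer,
# mirroring the nvmfappstart/waitforlisten steps recorded above.
SPDK_DIR=/path/to/spdk            # assumed checkout location
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll until the app services RPCs (waitforlisten does this with retries and a timeout).
until "$SPDK_DIR"/scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods &> /dev/null; do
    sleep 0.1
done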
00:25:38.429 [2024-11-17 16:02:23.881321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:38.429 [2024-11-17 16:02:23.881396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.429 [2024-11-17 16:02:23.881456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.429 [2024-11-17 16:02:23.881465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:38.996 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:38.996 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:25:38.996 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:38.996 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:38.996 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:38.996 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:38.996 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:38.996 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:38.996 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.996 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.304 Malloc0 00:25:39.304 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.304 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:39.304 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.304 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.304 Delay0 00:25:39.304 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.304 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:39.304 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.304 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.304 [2024-11-17 16:02:24.613093] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a040/0x7f891a7a6940) succeed. 00:25:39.304 [2024-11-17 16:02:24.622879] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002a1c0/0x7f891a762940) succeed. 
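Decoded from the xtrace above: the test builds a 64 MiB malloc bdev with 512-byte blocks, wraps it in a delay bdev whose average and p99 read/write latencies all start at 30 µs, and creates the RDMA transport with 1024 shared buffers and an 8192-byte I/O unit; the two "Create IB device mlx5_*" notices confirm the transport attached to both ConnectX ports. The subsystem, namespace, and listener RPCs follow in the next lines. A condensed recap of that sequence, written here as direct scripts/rpc.py calls for readability (the test drives the same RPCs through its rpc_cmd wrapper):

# Recap of the RPCs visible in this log, expressed as direct rpc.py calls.
rpc=./scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB, 512 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # latencies in usec
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# Export Delay0 over RDMA (these three appear in the lines that follow):
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420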
00:25:39.579 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.579 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:39.580 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.580 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.580 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.580 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:39.580 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.580 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.580 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.580 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:39.580 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.580 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:39.580 [2024-11-17 16:02:24.902196] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:39.580 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.580 16:02:24 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:25:40.512 16:02:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:40.513 16:02:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:25:40.513 16:02:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:40.513 16:02:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:40.513 16:02:25 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:25:42.410 16:02:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:42.410 16:02:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:42.410 16:02:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:25:42.410 16:02:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:42.410 16:02:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:42.410 16:02:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:25:42.410 16:02:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3407384 00:25:42.410 16:02:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:42.410 16:02:27 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:42.410 [global] 00:25:42.410 thread=1 00:25:42.410 invalidate=1 00:25:42.410 rw=write 00:25:42.410 time_based=1 00:25:42.410 runtime=60 00:25:42.410 ioengine=libaio 00:25:42.410 direct=1 00:25:42.410 bs=4096 00:25:42.410 iodepth=1 00:25:42.410 norandommap=0 00:25:42.410 numjobs=1 00:25:42.410 00:25:42.410 verify_dump=1 00:25:42.411 verify_backlog=512 00:25:42.411 verify_state_save=0 00:25:42.411 do_verify=1 00:25:42.411 verify=crc32c-intel 00:25:42.411 [job0] 00:25:42.411 filename=/dev/nvme0n1 00:25:42.679 Could not set queue depth (nvme0n1) 00:25:42.940 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:42.940 fio-3.35 00:25:42.940 Starting 1 thread 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.475 true 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.475 true 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.475 true 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:45.475 true 00:25:45.475 16:02:30 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.475 16:02:30 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.761 true 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.761 true 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.761 true 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:48.761 true 00:25:48.761 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.762 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:48.762 16:02:33 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3407384 00:26:44.990 00:26:44.990 job0: (groupid=0, jobs=1): err= 0: pid=3407580: Sun Nov 17 16:03:28 2024 00:26:44.990 read: IOPS=1134, BW=4540KiB/s (4649kB/s)(266MiB/60000msec) 00:26:44.990 slat (usec): min=5, max=11906, avg= 9.62, stdev=59.87 00:26:44.990 clat (usec): min=80, max=42514k, avg=740.37, stdev=162919.05 00:26:44.990 lat (usec): min=102, max=42514k, avg=749.98, stdev=162919.06 00:26:44.990 clat percentiles (usec): 00:26:44.990 | 1.00th=[ 100], 5.00th=[ 104], 10.00th=[ 106], 20.00th=[ 109], 00:26:44.990 | 30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 115], 60.00th=[ 117], 00:26:44.990 | 70.00th=[ 119], 80.00th=[ 122], 90.00th=[ 125], 95.00th=[ 129], 00:26:44.990 | 99.00th=[ 172], 99.50th=[ 231], 99.90th=[ 289], 99.95th=[ 322], 00:26:44.990 | 99.99th=[ 545] 00:26:44.990 write: IOPS=1140, BW=4560KiB/s (4670kB/s)(267MiB/60000msec); 0 zone resets 00:26:44.990 slat 
(usec): min=6, max=375, avg=12.36, stdev= 2.88 00:26:44.990 clat (usec): min=80, max=594, avg=112.78, stdev=14.53 00:26:44.990 lat (usec): min=97, max=606, avg=125.15, stdev=14.80 00:26:44.990 clat percentiles (usec): 00:26:44.990 | 1.00th=[ 98], 5.00th=[ 101], 10.00th=[ 103], 20.00th=[ 105], 00:26:44.990 | 30.00th=[ 109], 40.00th=[ 111], 50.00th=[ 112], 60.00th=[ 114], 00:26:44.990 | 70.00th=[ 116], 80.00th=[ 118], 90.00th=[ 122], 95.00th=[ 125], 00:26:44.990 | 99.00th=[ 147], 99.50th=[ 215], 99.90th=[ 289], 99.95th=[ 306], 00:26:44.990 | 99.99th=[ 502] 00:26:44.990 bw ( KiB/s): min= 4096, max=16440, per=100.00%, avg=15213.49, stdev=2173.49, samples=35 00:26:44.990 iops : min= 1024, max= 4110, avg=3803.43, stdev=543.35, samples=35 00:26:44.990 lat (usec) : 100=2.42%, 250=97.30%, 500=0.27%, 750=0.01% 00:26:44.990 lat (msec) : >=2000=0.01% 00:26:44.990 cpu : usr=1.76%, sys=2.73%, ctx=136511, majf=0, minf=236 00:26:44.990 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:44.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:44.990 issued rwts: total=68096,68403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:44.990 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:44.990 00:26:44.990 Run status group 0 (all jobs): 00:26:44.990 READ: bw=4540KiB/s (4649kB/s), 4540KiB/s-4540KiB/s (4649kB/s-4649kB/s), io=266MiB (279MB), run=60000-60000msec 00:26:44.990 WRITE: bw=4560KiB/s (4670kB/s), 4560KiB/s-4560KiB/s (4670kB/s-4670kB/s), io=267MiB (280MB), run=60000-60000msec 00:26:44.990 00:26:44.990 Disk stats (read/write): 00:26:44.990 nvme0n1: ios=67908/68096, merge=0/0, ticks=7415/7085, in_queue=14500, util=99.61% 00:26:44.990 16:03:28 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:44.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:44.990 nvmf hotplug test: fio successful as expected 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:44.990 16:03:29 
nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:26:44.990 rmmod nvme_rdma 00:26:44.990 rmmod nvme_fabrics 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 3406709 ']' 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 3406709 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 3406709 ']' 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 3406709 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3406709 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3406709' 00:26:44.990 killing process with pid 3406709 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 3406709 00:26:44.990 16:03:29 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 
-- # wait 3406709 00:26:45.930 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:45.930 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:26:45.930 00:26:45.930 real 1m14.637s 00:26:45.930 user 4m39.291s 00:26:45.930 sys 0m7.855s 00:26:45.930 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:45.930 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:45.930 ************************************ 00:26:45.930 END TEST nvmf_initiator_timeout 00:26:45.930 ************************************ 00:26:45.930 16:03:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:26:45.930 16:03:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:26:45.930 16:03:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:26:45.930 16:03:31 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:26:45.930 16:03:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:45.930 16:03:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:45.930 16:03:31 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:45.930 ************************************ 00:26:45.930 START TEST nvmf_srq_overwhelm 00:26:45.930 ************************************ 00:26:45.930 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:26:46.189 * Looking for test storage... 
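The nvmf_initiator_timeout test above finished in 1m14s of wall-clock time. Condensed, it did the following: connect the kernel initiator over RDMA to 192.168.100.8:4420 with nvme connect -i 15, start a 60-second, 4 KiB, queue-depth-1 verified write job against /dev/nvme0n1 via fio-wrapper, and three seconds in raise the Delay0 latencies from 30 µs to 31 s (310 s for p99 write); after another three seconds the latencies drop back to 30 µs and the job is expected to survive the stall. It does here: 68096 reads and 68403 writes over 60 s line up with the reported ~4540/4560 KiB/s (68096 × 4 KiB / 60 s ≈ 4540 KiB/s), and the script prints "fio successful as expected" before tearing the subsystem down. A sketch of the stall-and-recover toggle around the running job (paths and values as they appear in this log; treat it as an outline rather than a verbatim excerpt of initiator_timeout.sh):

# Outline of the stall-and-recover sequence wrapped around the 60 s fio job.
rpc=./scripts/rpc.py
./scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v &   # 4 KiB, QD1, verify
fio_pid=$!

sleep 3
# Push the backing delay bdev far past any reasonable initiator timeout (usec).
$rpc bdev_delay_update_latency Delay0 avg_read  31000000
$rpc bdev_delay_update_latency Delay0 avg_write 31000000
$rpc bdev_delay_update_latency Delay0 p99_read  31000000
$rpc bdev_delay_update_latency Delay0 p99_write 310000000

sleep 3
# Restore the original 30 usec latencies and let fio finish its timed run.
for lat in avg_read avg_write p99_read p99_write; do
    $rpc bdev_delay_update_latency Delay0 "$lat" 30
done

wait $fio_pid        # exit status 0 expected: the initiator rode out the stall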
00:26:46.189 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:26:46.189 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:46.189 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lcov --version 00:26:46.189 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:46.189 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:46.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.190 --rc genhtml_branch_coverage=1 00:26:46.190 --rc genhtml_function_coverage=1 00:26:46.190 --rc genhtml_legend=1 00:26:46.190 --rc geninfo_all_blocks=1 00:26:46.190 --rc geninfo_unexecuted_blocks=1 00:26:46.190 00:26:46.190 ' 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:46.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.190 --rc genhtml_branch_coverage=1 00:26:46.190 --rc genhtml_function_coverage=1 00:26:46.190 --rc genhtml_legend=1 00:26:46.190 --rc geninfo_all_blocks=1 00:26:46.190 --rc geninfo_unexecuted_blocks=1 00:26:46.190 00:26:46.190 ' 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:46.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.190 --rc genhtml_branch_coverage=1 00:26:46.190 --rc genhtml_function_coverage=1 00:26:46.190 --rc genhtml_legend=1 00:26:46.190 --rc geninfo_all_blocks=1 00:26:46.190 --rc geninfo_unexecuted_blocks=1 00:26:46.190 00:26:46.190 ' 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:46.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.190 --rc genhtml_branch_coverage=1 00:26:46.190 --rc genhtml_function_coverage=1 00:26:46.190 --rc genhtml_legend=1 00:26:46.190 --rc geninfo_all_blocks=1 00:26:46.190 --rc geninfo_unexecuted_blocks=1 00:26:46.190 00:26:46.190 ' 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
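The 'lt 1.15 2' check traced above is scripts/common.sh splitting each version string on dots and dashes and comparing the fields numerically, left to right. A minimal standalone sketch of that comparison, assuming purely numeric fields (function and variable names here are illustrative, not the script's own):

version_lt() {
    # Split both version strings on '.' and '-' (same IFS the trace shows).
    local IFS=.-
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}      # missing fields compare as 0 in this sketch
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                                 # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # mirrors the gate that enables the extra LCOV_OPTS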
00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:46.190 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:46.191 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:26:46.191 16:03:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:52.865 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:26:52.865 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:26:52.866 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:26:52.866 Found net devices under 0000:d9:00.0: mlx_0_0 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:26:52.866 Found net devices under 0000:d9:00.1: mlx_0_1 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # is_hw=yes 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
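Device discovery above works by globbing sysfs: for each candidate Mellanox PCI function (0x15b3:0x1015 in this log), every entry under /sys/bus/pci/devices/$pci/net/ names a kernel net device backed by that function. A small sketch of the same lookup, using the two addresses reported in the trace (the loop itself is illustrative, not the harness code):

for pci in 0000:d9:00.0 0000:d9:00.1; do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $netdev ]] || continue          # glob stays literal if the slot has no net device
        echo "Found net devices under $pci: ${netdev##*/}"
    done
done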
00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # rdma_device_init 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@530 -- # allocate_nic_ips 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:26:52.866 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:52.866 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:26:52.866 altname enp217s0f0np0 00:26:52.866 altname ens818f0np0 00:26:52.866 inet 192.168.100.8/24 scope global mlx_0_0 00:26:52.866 valid_lft forever preferred_lft forever 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:26:52.866 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:26:52.866 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:26:52.866 altname enp217s0f1np1 00:26:52.866 altname ens818f1np1 00:26:52.866 inet 192.168.100.9/24 scope global mlx_0_1 00:26:52.866 valid_lft forever preferred_lft forever 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # return 0 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:52.866 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # cut -d/ -f1 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:26:52.867 192.168.100.9' 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:26:52.867 192.168.100.9' 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # head -n 1 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # head -n 1 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:26:52.867 192.168.100.9' 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # tail -n +2 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # nvmfpid=3421764 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # waitforlisten 3421764 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # '[' -z 3421764 ']' 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
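The 192.168.100.8 / 192.168.100.9 addresses collected into RDMA_IP_LIST come from the ip/awk/cut pipeline traced above. Written out as a standalone helper (the function name is illustrative; the pipeline is the one in the trace):

first_ipv4_of() {
    local interface=$1
    # 'ip -o -4' prints one record per line; field 4 is the CIDR address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

first_ipv4_of mlx_0_0    # 192.168.100.8 on this rig, per the trace
first_ipv4_of mlx_0_1    # 192.168.100.9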
00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:52.867 16:03:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:53.126 [2024-11-17 16:03:38.464497] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:26:53.126 [2024-11-17 16:03:38.464596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:53.126 [2024-11-17 16:03:38.598083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:53.385 [2024-11-17 16:03:38.700628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:53.385 [2024-11-17 16:03:38.700669] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:53.386 [2024-11-17 16:03:38.700681] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:53.386 [2024-11-17 16:03:38.700694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:53.386 [2024-11-17 16:03:38.700704] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:53.386 [2024-11-17 16:03:38.703171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.386 [2024-11-17 16:03:38.703185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:53.386 [2024-11-17 16:03:38.703282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.386 [2024-11-17 16:03:38.703290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@868 -- # return 0 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:53.953 [2024-11-17 16:03:39.338282] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f1f7a5bd940) succeed. 00:26:53.953 [2024-11-17 16:03:39.347738] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f1f7a578940) succeed. 
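rpc_cmd in this harness forwards its arguments to scripts/rpc.py against the /var/tmp/spdk.sock socket that waitforlisten just probed, so the transport creation above is roughly equivalent to the direct call below (flags copied verbatim from the trace; the socket path is the default assumed here):

/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024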
00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.953 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:54.211 Malloc0 00:26:54.211 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.211 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:26:54.211 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.211 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:54.211 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.211 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:26:54.211 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.211 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:54.211 [2024-11-17 16:03:39.557151] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:26:54.212 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.212 16:03:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- 
# lsblk -l -o NAME 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:55.146 Malloc1 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.146 16:03:40 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme1n1 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme1n1 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:56.519 Malloc2 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:56.519 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.520 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:26:56.520 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.520 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:56.520 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.520 16:03:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme2n1 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme2n1 00:26:57.455 16:03:42 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:57.455 Malloc3 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.455 16:03:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme3n1 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme3n1 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:58.389 
16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.389 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:58.647 Malloc4 00:26:58.647 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.647 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:58.647 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.647 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:58.647 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.647 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:26:58.647 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.647 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:58.647 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.647 16:03:43 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:26:59.582 16:03:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:26:59.582 16:03:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:26:59.582 16:03:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:59.582 16:03:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme4n1 00:26:59.583 16:03:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:59.583 16:03:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme4n1 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
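Each pass of the 'for i in $(seq 0 5)' loop traced above (and the final pass below) repeats the same six steps: create a subsystem, back it with a malloc bdev, expose the namespace, listen on RDMA port 4420, connect from the initiator side, and wait for the block device to appear. Consolidated, with the literal values from the trace (rpc_cmd and waitforblk are the harness helpers seen above; the loop wrapper and comments are added for readability):

for i in $(seq 0 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc_cmd bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc$i      # 64 / 512 per the trace
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
    waitforblk nvme${i}n1                                                            # block until the namespace shows up
done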
00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:59.583 Malloc5 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.583 16:03:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:27:00.978 16:03:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:27:00.978 16:03:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1239 -- # local i=0 00:27:00.978 16:03:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:27:00.978 16:03:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # grep -q -w nvme5n1 00:27:00.978 16:03:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:27:00.978 16:03:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # grep -q -w nvme5n1 00:27:00.978 16:03:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1250 -- # return 0 00:27:00.978 16:03:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:27:00.978 
[global] 00:27:00.978 thread=1 00:27:00.978 invalidate=1 00:27:00.978 rw=read 00:27:00.978 time_based=1 00:27:00.978 runtime=10 00:27:00.978 ioengine=libaio 00:27:00.978 direct=1 00:27:00.978 bs=1048576 00:27:00.978 iodepth=128 00:27:00.978 norandommap=1 00:27:00.978 numjobs=13 00:27:00.978 00:27:00.978 [job0] 00:27:00.978 filename=/dev/nvme0n1 00:27:00.978 [job1] 00:27:00.978 filename=/dev/nvme1n1 00:27:00.978 [job2] 00:27:00.978 filename=/dev/nvme2n1 00:27:00.978 [job3] 00:27:00.978 filename=/dev/nvme3n1 00:27:00.978 [job4] 00:27:00.978 filename=/dev/nvme4n1 00:27:00.978 [job5] 00:27:00.978 filename=/dev/nvme5n1 00:27:00.978 Could not set queue depth (nvme0n1) 00:27:00.978 Could not set queue depth (nvme1n1) 00:27:00.978 Could not set queue depth (nvme2n1) 00:27:00.978 Could not set queue depth (nvme3n1) 00:27:00.978 Could not set queue depth (nvme4n1) 00:27:00.978 Could not set queue depth (nvme5n1) 00:27:01.236 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:01.236 ... 00:27:01.236 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:01.236 ... 00:27:01.236 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:01.236 ... 00:27:01.236 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:01.236 ... 00:27:01.236 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:01.236 ... 00:27:01.236 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:27:01.236 ... 
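Stripped of the timestamps, the job file that fio-wrapper feeds to fio for this run reads as follows (reconstructed verbatim from the dump above):

[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=1048576
iodepth=128
norandommap=1
numjobs=13

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme1n1
[job2]
filename=/dev/nvme2n1
[job3]
filename=/dev/nvme3n1
[job4]
filename=/dev/nvme4n1
[job5]
filename=/dev/nvme5n1

With numjobs=13 across the 6 target namespaces, this yields the 78 fio threads reported at the start of the run below.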
00:27:01.236 fio-3.35 00:27:01.236 Starting 78 threads 00:27:13.431 00:27:13.431 job0: (groupid=0, jobs=1): err= 0: pid=3423307: Sun Nov 17 16:03:57 2024 00:27:13.431 read: IOPS=34, BW=34.6MiB/s (36.3MB/s)(367MiB/10596msec) 00:27:13.431 slat (usec): min=33, max=2161.0k, avg=28859.11, stdev=203952.66 00:27:13.431 clat (usec): min=1726, max=5158.1k, avg=2099327.53, stdev=1686558.76 00:27:13.431 lat (msec): min=632, max=5166, avg=2128.19, stdev=1692.24 00:27:13.431 clat percentiles (msec): 00:27:13.431 | 1.00th=[ 634], 5.00th=[ 659], 10.00th=[ 693], 20.00th=[ 726], 00:27:13.431 | 30.00th=[ 768], 40.00th=[ 810], 50.00th=[ 844], 60.00th=[ 1854], 00:27:13.431 | 70.00th=[ 4178], 80.00th=[ 4396], 90.00th=[ 4597], 95.00th=[ 4665], 00:27:13.431 | 99.00th=[ 4732], 99.50th=[ 5067], 99.90th=[ 5134], 99.95th=[ 5134], 00:27:13.431 | 99.99th=[ 5134] 00:27:13.431 bw ( KiB/s): min=36790, max=185996, per=3.62%, avg=122256.50, stdev=66451.60, samples=4 00:27:13.431 iops : min= 35, max= 181, avg=119.00, stdev=65.09, samples=4 00:27:13.431 lat (msec) : 2=0.27%, 750=24.80%, 1000=29.97%, 2000=8.17%, >=2000=36.78% 00:27:13.431 cpu : usr=0.03%, sys=1.41%, ctx=379, majf=0, minf=32769 00:27:13.431 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.7%, >=64=82.8% 00:27:13.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.431 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:27:13.431 issued rwts: total=367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.431 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.431 job0: (groupid=0, jobs=1): err= 0: pid=3423308: Sun Nov 17 16:03:57 2024 00:27:13.431 read: IOPS=58, BW=58.3MiB/s (61.1MB/s)(621MiB/10649msec) 00:27:13.431 slat (usec): min=43, max=2107.7k, avg=17033.00, stdev=143113.01 00:27:13.431 clat (msec): min=65, max=6788, avg=1918.60, stdev=2283.14 00:27:13.431 lat (msec): min=391, max=6790, avg=1935.63, stdev=2287.80 00:27:13.431 clat percentiles (msec): 00:27:13.431 | 1.00th=[ 393], 5.00th=[ 401], 10.00th=[ 435], 20.00th=[ 502], 00:27:13.431 | 30.00th=[ 625], 40.00th=[ 693], 50.00th=[ 709], 60.00th=[ 802], 00:27:13.431 | 70.00th=[ 1200], 80.00th=[ 4245], 90.00th=[ 6611], 95.00th=[ 6678], 00:27:13.431 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6812], 99.95th=[ 6812], 00:27:13.431 | 99.99th=[ 6812] 00:27:13.431 bw ( KiB/s): min= 2052, max=311296, per=3.75%, avg=126460.88, stdev=108488.27, samples=8 00:27:13.432 iops : min= 2, max= 304, avg=123.38, stdev=106.09, samples=8 00:27:13.432 lat (msec) : 100=0.16%, 500=19.00%, 750=34.46%, 1000=11.43%, 2000=10.47% 00:27:13.432 lat (msec) : >=2000=24.48% 00:27:13.432 cpu : usr=0.04%, sys=1.61%, ctx=687, majf=0, minf=32769 00:27:13.432 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.9% 00:27:13.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.432 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.432 issued rwts: total=621,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.432 job0: (groupid=0, jobs=1): err= 0: pid=3423309: Sun Nov 17 16:03:57 2024 00:27:13.432 read: IOPS=56, BW=56.1MiB/s (58.8MB/s)(599MiB/10686msec) 00:27:13.432 slat (usec): min=39, max=2175.5k, avg=17735.70, stdev=146538.48 00:27:13.432 clat (msec): min=57, max=10539, avg=1814.69, stdev=2180.31 00:27:13.432 lat (msec): min=459, max=10657, avg=1832.42, stdev=2198.28 00:27:13.432 clat percentiles (msec): 00:27:13.432 | 
1.00th=[ 460], 5.00th=[ 472], 10.00th=[ 481], 20.00th=[ 493], 00:27:13.432 | 30.00th=[ 502], 40.00th=[ 514], 50.00th=[ 535], 60.00th=[ 642], 00:27:13.432 | 70.00th=[ 1267], 80.00th=[ 5000], 90.00th=[ 5805], 95.00th=[ 6208], 00:27:13.432 | 99.00th=[ 6544], 99.50th=[ 8658], 99.90th=[10537], 99.95th=[10537], 00:27:13.432 | 99.99th=[10537] 00:27:13.432 bw ( KiB/s): min= 1587, max=247808, per=2.86%, avg=96588.80, stdev=90530.26, samples=10 00:27:13.432 iops : min= 1, max= 242, avg=94.20, stdev=88.42, samples=10 00:27:13.432 lat (msec) : 100=0.17%, 500=26.21%, 750=34.89%, 1000=4.17%, 2000=10.35% 00:27:13.432 lat (msec) : >=2000=24.21% 00:27:13.432 cpu : usr=0.05%, sys=1.50%, ctx=594, majf=0, minf=32769 00:27:13.432 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.3%, >=64=89.5% 00:27:13.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.432 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.432 issued rwts: total=599,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.432 job0: (groupid=0, jobs=1): err= 0: pid=3423310: Sun Nov 17 16:03:57 2024 00:27:13.432 read: IOPS=6, BW=7055KiB/s (7225kB/s)(74.0MiB/10740msec) 00:27:13.432 slat (usec): min=442, max=4253.3k, avg=144665.01, stdev=604142.48 00:27:13.432 clat (msec): min=34, max=10738, avg=4430.28, stdev=3610.12 00:27:13.432 lat (msec): min=1449, max=10739, avg=4574.94, stdev=3645.88 00:27:13.432 clat percentiles (msec): 00:27:13.432 | 1.00th=[ 35], 5.00th=[ 1452], 10.00th=[ 1469], 20.00th=[ 1569], 00:27:13.432 | 30.00th=[ 1670], 40.00th=[ 1787], 50.00th=[ 1905], 60.00th=[ 6275], 00:27:13.432 | 70.00th=[ 6409], 80.00th=[ 8490], 90.00th=[10671], 95.00th=[10671], 00:27:13.432 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:27:13.432 | 99.99th=[10805] 00:27:13.432 lat (msec) : 50=1.35%, 2000=48.65%, >=2000=50.00% 00:27:13.432 cpu : usr=0.00%, sys=0.50%, ctx=165, majf=0, minf=18945 00:27:13.432 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.8%, 16=21.6%, 32=43.2%, >=64=14.9% 00:27:13.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.432 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:13.432 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.432 job0: (groupid=0, jobs=1): err= 0: pid=3423311: Sun Nov 17 16:03:57 2024 00:27:13.432 read: IOPS=30, BW=30.3MiB/s (31.8MB/s)(324MiB/10683msec) 00:27:13.432 slat (usec): min=40, max=2023.9k, avg=32812.25, stdev=187960.65 00:27:13.432 clat (msec): min=50, max=7163, avg=2464.11, stdev=1351.99 00:27:13.432 lat (msec): min=1070, max=7246, avg=2496.92, stdev=1371.00 00:27:13.432 clat percentiles (msec): 00:27:13.432 | 1.00th=[ 1167], 5.00th=[ 1183], 10.00th=[ 1385], 20.00th=[ 1552], 00:27:13.432 | 30.00th=[ 1687], 40.00th=[ 1804], 50.00th=[ 2005], 60.00th=[ 2165], 00:27:13.432 | 70.00th=[ 3071], 80.00th=[ 3272], 90.00th=[ 3373], 95.00th=[ 7013], 00:27:13.432 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7148], 99.95th=[ 7148], 00:27:13.432 | 99.99th=[ 7148] 00:27:13.432 bw ( KiB/s): min= 2052, max=153600, per=1.71%, avg=57619.00, stdev=50446.25, samples=7 00:27:13.432 iops : min= 2, max= 150, avg=56.14, stdev=49.25, samples=7 00:27:13.432 lat (msec) : 100=0.31%, 2000=48.77%, >=2000=50.93% 00:27:13.432 cpu : usr=0.02%, sys=1.13%, ctx=456, majf=0, minf=32769 00:27:13.432 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 
8=2.5%, 16=4.9%, 32=9.9%, >=64=80.6% 00:27:13.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.432 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:27:13.432 issued rwts: total=324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.432 job0: (groupid=0, jobs=1): err= 0: pid=3423312: Sun Nov 17 16:03:57 2024 00:27:13.432 read: IOPS=31, BW=31.4MiB/s (32.9MB/s)(339MiB/10813msec) 00:27:13.432 slat (usec): min=45, max=2093.4k, avg=31703.26, stdev=201464.05 00:27:13.432 clat (msec): min=63, max=5352, avg=3076.79, stdev=1262.60 00:27:13.432 lat (msec): min=1095, max=6279, avg=3108.49, stdev=1266.00 00:27:13.432 clat percentiles (msec): 00:27:13.432 | 1.00th=[ 1099], 5.00th=[ 1099], 10.00th=[ 1938], 20.00th=[ 2039], 00:27:13.432 | 30.00th=[ 2123], 40.00th=[ 2198], 50.00th=[ 3104], 60.00th=[ 3406], 00:27:13.432 | 70.00th=[ 3675], 80.00th=[ 3943], 90.00th=[ 5269], 95.00th=[ 5336], 00:27:13.432 | 99.00th=[ 5336], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336], 00:27:13.432 | 99.99th=[ 5336] 00:27:13.432 bw ( KiB/s): min=12288, max=131072, per=2.13%, avg=72021.33, stdev=45600.40, samples=6 00:27:13.432 iops : min= 12, max= 128, avg=70.33, stdev=44.53, samples=6 00:27:13.432 lat (msec) : 100=0.29%, 2000=13.86%, >=2000=85.84% 00:27:13.432 cpu : usr=0.01%, sys=1.17%, ctx=385, majf=0, minf=32331 00:27:13.432 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.4%, >=64=81.4% 00:27:13.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.432 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:27:13.432 issued rwts: total=339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.432 job0: (groupid=0, jobs=1): err= 0: pid=3423313: Sun Nov 17 16:03:57 2024 00:27:13.432 read: IOPS=72, BW=72.2MiB/s (75.7MB/s)(774MiB/10720msec) 00:27:13.432 slat (usec): min=43, max=2095.4k, avg=13787.88, stdev=131418.33 00:27:13.432 clat (msec): min=43, max=7737, avg=1686.53, stdev=2551.42 00:27:13.432 lat (msec): min=415, max=7738, avg=1700.32, stdev=2558.31 00:27:13.432 clat percentiles (msec): 00:27:13.432 | 1.00th=[ 418], 5.00th=[ 422], 10.00th=[ 422], 20.00th=[ 447], 00:27:13.432 | 30.00th=[ 550], 40.00th=[ 550], 50.00th=[ 567], 60.00th=[ 575], 00:27:13.432 | 70.00th=[ 651], 80.00th=[ 709], 90.00th=[ 7483], 95.00th=[ 7617], 00:27:13.432 | 99.00th=[ 7684], 99.50th=[ 7752], 99.90th=[ 7752], 99.95th=[ 7752], 00:27:13.432 | 99.99th=[ 7752] 00:27:13.432 bw ( KiB/s): min= 1519, max=312718, per=3.92%, avg=132390.10, stdev=119790.31, samples=10 00:27:13.432 iops : min= 1, max= 305, avg=129.20, stdev=116.98, samples=10 00:27:13.432 lat (msec) : 50=0.13%, 500=25.45%, 750=56.85%, 2000=0.52%, >=2000=17.05% 00:27:13.432 cpu : usr=0.02%, sys=1.68%, ctx=735, majf=0, minf=32769 00:27:13.432 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9% 00:27:13.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.432 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.432 issued rwts: total=774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.432 job0: (groupid=0, jobs=1): err= 0: pid=3423314: Sun Nov 17 16:03:57 2024 00:27:13.432 read: IOPS=30, BW=30.0MiB/s (31.5MB/s)(321MiB/10690msec) 00:27:13.432 slat (usec): min=36, max=2133.3k, avg=31151.03, 
stdev=202286.15 00:27:13.432 clat (msec): min=679, max=8474, avg=3915.09, stdev=3449.76 00:27:13.432 lat (msec): min=680, max=8477, avg=3946.24, stdev=3456.87 00:27:13.432 clat percentiles (msec): 00:27:13.432 | 1.00th=[ 693], 5.00th=[ 760], 10.00th=[ 785], 20.00th=[ 927], 00:27:13.432 | 30.00th=[ 1036], 40.00th=[ 1200], 50.00th=[ 1368], 60.00th=[ 4010], 00:27:13.432 | 70.00th=[ 8020], 80.00th=[ 8221], 90.00th=[ 8423], 95.00th=[ 8490], 00:27:13.432 | 99.00th=[ 8490], 99.50th=[ 8490], 99.90th=[ 8490], 99.95th=[ 8490], 00:27:13.432 | 99.99th=[ 8490] 00:27:13.432 bw ( KiB/s): min= 2048, max=166000, per=1.66%, avg=56189.71, stdev=67416.10, samples=7 00:27:13.432 iops : min= 2, max= 162, avg=54.86, stdev=65.81, samples=7 00:27:13.432 lat (msec) : 750=4.36%, 1000=20.56%, 2000=33.33%, >=2000=41.74% 00:27:13.432 cpu : usr=0.01%, sys=1.02%, ctx=481, majf=0, minf=32769 00:27:13.432 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=10.0%, >=64=80.4% 00:27:13.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.432 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:27:13.432 issued rwts: total=321,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.432 job0: (groupid=0, jobs=1): err= 0: pid=3423315: Sun Nov 17 16:03:57 2024 00:27:13.432 read: IOPS=80, BW=80.9MiB/s (84.9MB/s)(816MiB/10084msec) 00:27:13.432 slat (usec): min=42, max=2081.8k, avg=12272.30, stdev=83473.96 00:27:13.432 clat (msec): min=63, max=4813, avg=1082.89, stdev=816.92 00:27:13.432 lat (msec): min=150, max=4815, avg=1095.16, stdev=826.97 00:27:13.432 clat percentiles (msec): 00:27:13.432 | 1.00th=[ 169], 5.00th=[ 447], 10.00th=[ 709], 20.00th=[ 835], 00:27:13.432 | 30.00th=[ 860], 40.00th=[ 885], 50.00th=[ 911], 60.00th=[ 953], 00:27:13.432 | 70.00th=[ 986], 80.00th=[ 1045], 90.00th=[ 1250], 95.00th=[ 3977], 00:27:13.432 | 99.00th=[ 4665], 99.50th=[ 4732], 99.90th=[ 4799], 99.95th=[ 4799], 00:27:13.432 | 99.99th=[ 4799] 00:27:13.432 bw ( KiB/s): min=55296, max=200303, per=3.79%, avg=128028.00, stdev=40332.85, samples=11 00:27:13.432 iops : min= 54, max= 195, avg=124.91, stdev=39.23, samples=11 00:27:13.432 lat (msec) : 100=0.12%, 250=1.96%, 500=3.80%, 750=8.82%, 1000=58.09% 00:27:13.432 lat (msec) : 2000=21.69%, >=2000=5.51% 00:27:13.432 cpu : usr=0.03%, sys=1.81%, ctx=825, majf=0, minf=32769 00:27:13.432 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:27:13.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.432 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:13.432 issued rwts: total=816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.433 job0: (groupid=0, jobs=1): err= 0: pid=3423316: Sun Nov 17 16:03:57 2024 00:27:13.433 read: IOPS=65, BW=65.8MiB/s (69.0MB/s)(661MiB/10043msec) 00:27:13.433 slat (usec): min=39, max=2088.2k, avg=15124.17, stdev=110808.51 00:27:13.433 clat (msec): min=41, max=5344, avg=1218.22, stdev=912.91 00:27:13.433 lat (msec): min=43, max=5346, avg=1233.34, stdev=926.91 00:27:13.433 clat percentiles (msec): 00:27:13.433 | 1.00th=[ 51], 5.00th=[ 228], 10.00th=[ 439], 20.00th=[ 852], 00:27:13.433 | 30.00th=[ 911], 40.00th=[ 953], 50.00th=[ 995], 60.00th=[ 1099], 00:27:13.433 | 70.00th=[ 1284], 80.00th=[ 1502], 90.00th=[ 1754], 95.00th=[ 3373], 00:27:13.433 | 99.00th=[ 5336], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336], 
00:27:13.433 | 99.99th=[ 5336] 00:27:13.433 bw ( KiB/s): min=59392, max=157696, per=3.08%, avg=103935.56, stdev=39615.79, samples=9 00:27:13.433 iops : min= 58, max= 154, avg=101.33, stdev=38.57, samples=9 00:27:13.433 lat (msec) : 50=0.91%, 100=1.21%, 250=3.78%, 500=5.45%, 750=4.69% 00:27:13.433 lat (msec) : 1000=35.85%, 2000=42.81%, >=2000=5.30% 00:27:13.433 cpu : usr=0.04%, sys=1.51%, ctx=700, majf=0, minf=32769 00:27:13.433 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:27:13.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.433 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.433 issued rwts: total=661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.433 job0: (groupid=0, jobs=1): err= 0: pid=3423317: Sun Nov 17 16:03:57 2024 00:27:13.433 read: IOPS=10, BW=10.5MiB/s (11.0MB/s)(113MiB/10748msec) 00:27:13.433 slat (usec): min=636, max=2083.6k, avg=95091.88, stdev=397465.72 00:27:13.433 clat (usec): min=1689, max=10745k, avg=7330782.33, stdev=3606025.44 00:27:13.433 lat (msec): min=1633, max=10747, avg=7425.87, stdev=3552.31 00:27:13.433 clat percentiles (msec): 00:27:13.433 | 1.00th=[ 1636], 5.00th=[ 1737], 10.00th=[ 1871], 20.00th=[ 4077], 00:27:13.433 | 30.00th=[ 4111], 40.00th=[ 6342], 50.00th=[ 8557], 60.00th=[10671], 00:27:13.433 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10671], 95.00th=[10805], 00:27:13.433 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:27:13.433 | 99.99th=[10805] 00:27:13.433 lat (msec) : 2=0.88%, 2000=13.27%, >=2000=85.84% 00:27:13.433 cpu : usr=0.02%, sys=0.95%, ctx=237, majf=0, minf=28929 00:27:13.433 IO depths : 1=0.9%, 2=1.8%, 4=3.5%, 8=7.1%, 16=14.2%, 32=28.3%, >=64=44.2% 00:27:13.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.433 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:13.433 issued rwts: total=113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.433 job0: (groupid=0, jobs=1): err= 0: pid=3423318: Sun Nov 17 16:03:57 2024 00:27:13.433 read: IOPS=52, BW=52.4MiB/s (54.9MB/s)(566MiB/10801msec) 00:27:13.433 slat (usec): min=45, max=2002.1k, avg=18985.44, stdev=155449.34 00:27:13.433 clat (msec): min=50, max=8767, avg=2352.65, stdev=2684.13 00:27:13.433 lat (msec): min=560, max=8769, avg=2371.64, stdev=2694.25 00:27:13.433 clat percentiles (msec): 00:27:13.433 | 1.00th=[ 558], 5.00th=[ 567], 10.00th=[ 567], 20.00th=[ 567], 00:27:13.433 | 30.00th=[ 575], 40.00th=[ 575], 50.00th=[ 584], 60.00th=[ 667], 00:27:13.433 | 70.00th=[ 2970], 80.00th=[ 4665], 90.00th=[ 6409], 95.00th=[ 8658], 00:27:13.433 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:27:13.433 | 99.99th=[ 8792] 00:27:13.433 bw ( KiB/s): min=10240, max=231424, per=2.95%, avg=99619.78, stdev=88945.95, samples=9 00:27:13.433 iops : min= 10, max= 226, avg=97.22, stdev=86.76, samples=9 00:27:13.433 lat (msec) : 100=0.18%, 750=63.60%, >=2000=36.22% 00:27:13.433 cpu : usr=0.03%, sys=1.75%, ctx=598, majf=0, minf=32770 00:27:13.433 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.9% 00:27:13.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.433 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.433 issued rwts: total=566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:27:13.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.433 job0: (groupid=0, jobs=1): err= 0: pid=3423319: Sun Nov 17 16:03:57 2024 00:27:13.433 read: IOPS=15, BW=15.3MiB/s (16.0MB/s)(164MiB/10721msec) 00:27:13.433 slat (usec): min=63, max=2095.4k, avg=65018.25, stdev=313141.10 00:27:13.433 clat (msec): min=56, max=6052, avg=4412.41, stdev=1345.33 00:27:13.433 lat (msec): min=1747, max=6061, avg=4477.43, stdev=1292.60 00:27:13.433 clat percentiles (msec): 00:27:13.433 | 1.00th=[ 1737], 5.00th=[ 1821], 10.00th=[ 1989], 20.00th=[ 3842], 00:27:13.433 | 30.00th=[ 4463], 40.00th=[ 4665], 50.00th=[ 4799], 60.00th=[ 4933], 00:27:13.433 | 70.00th=[ 5201], 80.00th=[ 5537], 90.00th=[ 5738], 95.00th=[ 5873], 00:27:13.433 | 99.00th=[ 6074], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074], 00:27:13.433 | 99.99th=[ 6074] 00:27:13.433 bw ( KiB/s): min= 1519, max=43008, per=0.56%, avg=18811.75, stdev=17895.58, samples=4 00:27:13.433 iops : min= 1, max= 42, avg=18.25, stdev=17.63, samples=4 00:27:13.433 lat (msec) : 100=0.61%, 2000=12.20%, >=2000=87.20% 00:27:13.433 cpu : usr=0.00%, sys=1.14%, ctx=322, majf=0, minf=32769 00:27:13.433 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.9%, 16=9.8%, 32=19.5%, >=64=61.6% 00:27:13.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.433 complete : 0=0.0%, 4=97.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.6% 00:27:13.433 issued rwts: total=164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.433 job1: (groupid=0, jobs=1): err= 0: pid=3423334: Sun Nov 17 16:03:57 2024 00:27:13.433 read: IOPS=17, BW=17.4MiB/s (18.2MB/s)(185MiB/10647msec) 00:27:13.433 slat (usec): min=639, max=2097.9k, avg=57155.64, stdev=294335.37 00:27:13.433 clat (msec): min=71, max=9872, avg=6712.50, stdev=3186.60 00:27:13.433 lat (msec): min=1649, max=9878, avg=6769.65, stdev=3151.78 00:27:13.433 clat percentiles (msec): 00:27:13.433 | 1.00th=[ 1636], 5.00th=[ 1670], 10.00th=[ 1687], 20.00th=[ 1737], 00:27:13.433 | 30.00th=[ 4279], 40.00th=[ 7752], 50.00th=[ 8658], 60.00th=[ 8926], 00:27:13.433 | 70.00th=[ 9060], 80.00th=[ 9329], 90.00th=[ 9597], 95.00th=[ 9731], 00:27:13.433 | 99.00th=[ 9866], 99.50th=[ 9866], 99.90th=[ 9866], 99.95th=[ 9866], 00:27:13.433 | 99.99th=[ 9866] 00:27:13.433 bw ( KiB/s): min= 2048, max=45056, per=0.50%, avg=16969.71, stdev=15938.48, samples=7 00:27:13.433 iops : min= 2, max= 44, avg=16.57, stdev=15.57, samples=7 00:27:13.433 lat (msec) : 100=0.54%, 2000=20.00%, >=2000=79.46% 00:27:13.433 cpu : usr=0.00%, sys=1.01%, ctx=473, majf=0, minf=32769 00:27:13.433 IO depths : 1=0.5%, 2=1.1%, 4=2.2%, 8=4.3%, 16=8.6%, 32=17.3%, >=64=65.9% 00:27:13.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.433 complete : 0=0.0%, 4=98.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.7% 00:27:13.433 issued rwts: total=185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.433 job1: (groupid=0, jobs=1): err= 0: pid=3423335: Sun Nov 17 16:03:57 2024 00:27:13.433 read: IOPS=21, BW=21.5MiB/s (22.6MB/s)(229MiB/10637msec) 00:27:13.433 slat (usec): min=104, max=2128.0k, avg=46431.74, stdev=237226.31 00:27:13.433 clat (msec): min=2, max=9220, avg=5396.06, stdev=2901.43 00:27:13.433 lat (msec): min=1065, max=9224, avg=5442.49, stdev=2889.50 00:27:13.433 clat percentiles (msec): 00:27:13.433 | 1.00th=[ 1070], 5.00th=[ 1217], 10.00th=[ 1485], 20.00th=[ 1754], 00:27:13.433 | 
30.00th=[ 3675], 40.00th=[ 3775], 50.00th=[ 6275], 60.00th=[ 6409], 00:27:13.433 | 70.00th=[ 8020], 80.00th=[ 8221], 90.00th=[ 8926], 95.00th=[ 9060], 00:27:13.433 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:27:13.433 | 99.99th=[ 9194] 00:27:13.433 bw ( KiB/s): min= 2043, max=71680, per=0.77%, avg=25842.62, stdev=24042.28, samples=8 00:27:13.433 iops : min= 1, max= 70, avg=25.00, stdev=23.71, samples=8 00:27:13.433 lat (msec) : 4=0.44%, 2000=25.76%, >=2000=73.80% 00:27:13.433 cpu : usr=0.00%, sys=1.01%, ctx=617, majf=0, minf=32769 00:27:13.433 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.5%, 16=7.0%, 32=14.0%, >=64=72.5% 00:27:13.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.433 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:27:13.433 issued rwts: total=229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.433 job1: (groupid=0, jobs=1): err= 0: pid=3423336: Sun Nov 17 16:03:57 2024 00:27:13.433 read: IOPS=4, BW=4877KiB/s (4994kB/s)(51.0MiB/10709msec) 00:27:13.433 slat (usec): min=661, max=2131.0k, avg=208665.07, stdev=596859.73 00:27:13.433 clat (msec): min=66, max=10703, avg=7635.23, stdev=2685.20 00:27:13.433 lat (msec): min=2086, max=10708, avg=7843.89, stdev=2491.76 00:27:13.433 clat percentiles (msec): 00:27:13.433 | 1.00th=[ 66], 5.00th=[ 2123], 10.00th=[ 6007], 20.00th=[ 6074], 00:27:13.433 | 30.00th=[ 6141], 40.00th=[ 6141], 50.00th=[ 6342], 60.00th=[ 8557], 00:27:13.433 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:27:13.433 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:27:13.433 | 99.99th=[10671] 00:27:13.433 lat (msec) : 100=1.96%, >=2000=98.04% 00:27:13.433 cpu : usr=0.01%, sys=0.34%, ctx=126, majf=0, minf=13057 00:27:13.433 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:27:13.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.433 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:27:13.433 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.433 job1: (groupid=0, jobs=1): err= 0: pid=3423337: Sun Nov 17 16:03:57 2024 00:27:13.433 read: IOPS=56, BW=56.7MiB/s (59.5MB/s)(605MiB/10664msec) 00:27:13.433 slat (usec): min=39, max=2139.2k, avg=17502.62, stdev=148158.32 00:27:13.433 clat (msec): min=71, max=7667, avg=2032.26, stdev=2823.78 00:27:13.433 lat (msec): min=133, max=7668, avg=2049.77, stdev=2830.15 00:27:13.433 clat percentiles (msec): 00:27:13.433 | 1.00th=[ 133], 5.00th=[ 134], 10.00th=[ 136], 20.00th=[ 138], 00:27:13.433 | 30.00th=[ 239], 40.00th=[ 380], 50.00th=[ 439], 60.00th=[ 1003], 00:27:13.433 | 70.00th=[ 1552], 80.00th=[ 6208], 90.00th=[ 7617], 95.00th=[ 7617], 00:27:13.433 | 99.00th=[ 7684], 99.50th=[ 7684], 99.90th=[ 7684], 99.95th=[ 7684], 00:27:13.433 | 99.99th=[ 7684] 00:27:13.433 bw ( KiB/s): min= 2048, max=526336, per=3.22%, avg=108772.00, stdev=176557.15, samples=9 00:27:13.433 iops : min= 2, max= 514, avg=106.22, stdev=172.42, samples=9 00:27:13.434 lat (msec) : 100=0.17%, 250=30.08%, 500=22.81%, 750=2.15%, 1000=4.46% 00:27:13.434 lat (msec) : 2000=18.84%, >=2000=21.49% 00:27:13.434 cpu : usr=0.03%, sys=1.22%, ctx=909, majf=0, minf=32769 00:27:13.434 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:27:13.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.434 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.434 issued rwts: total=605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.434 job1: (groupid=0, jobs=1): err= 0: pid=3423338: Sun Nov 17 16:03:57 2024 00:27:13.434 read: IOPS=15, BW=15.8MiB/s (16.6MB/s)(171MiB/10795msec) 00:27:13.434 slat (usec): min=589, max=2122.5k, avg=62734.67, stdev=307237.14 00:27:13.434 clat (msec): min=65, max=10325, avg=7450.17, stdev=3096.38 00:27:13.434 lat (msec): min=1781, max=10330, avg=7512.91, stdev=3047.83 00:27:13.434 clat percentiles (msec): 00:27:13.434 | 1.00th=[ 1787], 5.00th=[ 1905], 10.00th=[ 2022], 20.00th=[ 4044], 00:27:13.434 | 30.00th=[ 6342], 40.00th=[ 8658], 50.00th=[ 9060], 60.00th=[ 9329], 00:27:13.434 | 70.00th=[ 9463], 80.00th=[ 9866], 90.00th=[10134], 95.00th=[10268], 00:27:13.434 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:27:13.434 | 99.99th=[10268] 00:27:13.434 bw ( KiB/s): min= 2048, max=34816, per=0.43%, avg=14677.33, stdev=13172.11, samples=6 00:27:13.434 iops : min= 2, max= 34, avg=14.33, stdev=12.86, samples=6 00:27:13.434 lat (msec) : 100=0.58%, 2000=8.19%, >=2000=91.23% 00:27:13.434 cpu : usr=0.00%, sys=1.07%, ctx=518, majf=0, minf=32769 00:27:13.434 IO depths : 1=0.6%, 2=1.2%, 4=2.3%, 8=4.7%, 16=9.4%, 32=18.7%, >=64=63.2% 00:27:13.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.434 complete : 0=0.0%, 4=97.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.2% 00:27:13.434 issued rwts: total=171,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.434 job1: (groupid=0, jobs=1): err= 0: pid=3423339: Sun Nov 17 16:03:57 2024 00:27:13.434 read: IOPS=243, BW=244MiB/s (256MB/s)(2459MiB/10081msec) 00:27:13.434 slat (usec): min=39, max=2057.8k, avg=4079.84, stdev=42630.17 00:27:13.434 clat (msec): min=39, max=3935, avg=473.69, stdev=751.28 00:27:13.434 lat (msec): min=85, max=3941, avg=477.77, stdev=756.20 00:27:13.434 clat percentiles (msec): 00:27:13.434 | 1.00th=[ 138], 5.00th=[ 138], 10.00th=[ 138], 20.00th=[ 140], 00:27:13.434 | 30.00th=[ 140], 40.00th=[ 140], 50.00th=[ 142], 60.00th=[ 176], 00:27:13.434 | 70.00th=[ 439], 80.00th=[ 709], 90.00th=[ 835], 95.00th=[ 3004], 00:27:13.434 | 99.00th=[ 3641], 99.50th=[ 3775], 99.90th=[ 3910], 99.95th=[ 3910], 00:27:13.434 | 99.99th=[ 3943] 00:27:13.434 bw ( KiB/s): min=22528, max=937984, per=10.10%, avg=340968.93, stdev=356539.88, samples=14 00:27:13.434 iops : min= 22, max= 916, avg=332.93, stdev=348.21, samples=14 00:27:13.434 lat (msec) : 50=0.04%, 100=0.08%, 250=66.61%, 500=5.25%, 750=11.39% 00:27:13.434 lat (msec) : 1000=10.45%, 2000=0.98%, >=2000=5.21% 00:27:13.434 cpu : usr=0.07%, sys=2.65%, ctx=2603, majf=0, minf=32769 00:27:13.434 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:27:13.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.434 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:13.434 issued rwts: total=2459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.434 job1: (groupid=0, jobs=1): err= 0: pid=3423340: Sun Nov 17 16:03:57 2024 00:27:13.434 read: IOPS=6, BW=6785KiB/s (6947kB/s)(71.0MiB/10716msec) 00:27:13.434 slat (usec): min=831, max=2045.6k, avg=150879.61, stdev=491310.10 00:27:13.434 
clat (msec): min=3, max=10714, avg=7328.64, stdev=3404.90 00:27:13.434 lat (msec): min=1779, max=10715, avg=7479.52, stdev=3311.73 00:27:13.434 clat percentiles (msec): 00:27:13.434 | 1.00th=[ 4], 5.00th=[ 1888], 10.00th=[ 1921], 20.00th=[ 4144], 00:27:13.434 | 30.00th=[ 4178], 40.00th=[ 6342], 50.00th=[ 8490], 60.00th=[10402], 00:27:13.434 | 70.00th=[10537], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:27:13.434 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:27:13.434 | 99.99th=[10671] 00:27:13.434 lat (msec) : 4=1.41%, 2000=9.86%, >=2000=88.73% 00:27:13.434 cpu : usr=0.00%, sys=0.51%, ctx=205, majf=0, minf=18177 00:27:13.434 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3% 00:27:13.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.434 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:13.434 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.434 job1: (groupid=0, jobs=1): err= 0: pid=3423341: Sun Nov 17 16:03:57 2024 00:27:13.434 read: IOPS=95, BW=95.6MiB/s (100MB/s)(1032MiB/10794msec) 00:27:13.434 slat (usec): min=42, max=2092.2k, avg=10390.04, stdev=109286.25 00:27:13.434 clat (msec): min=66, max=5126, avg=916.47, stdev=1044.53 00:27:13.434 lat (msec): min=418, max=5128, avg=926.86, stdev=1052.87 00:27:13.434 clat percentiles (msec): 00:27:13.434 | 1.00th=[ 418], 5.00th=[ 422], 10.00th=[ 422], 20.00th=[ 426], 00:27:13.434 | 30.00th=[ 430], 40.00th=[ 443], 50.00th=[ 558], 60.00th=[ 567], 00:27:13.434 | 70.00th=[ 600], 80.00th=[ 659], 90.00th=[ 2400], 95.00th=[ 2601], 00:27:13.434 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5134], 99.95th=[ 5134], 00:27:13.434 | 99.99th=[ 5134] 00:27:13.434 bw ( KiB/s): min=57344, max=305152, per=6.85%, avg=231424.00, stdev=82858.17, samples=8 00:27:13.434 iops : min= 56, max= 298, avg=226.00, stdev=80.92, samples=8 00:27:13.434 lat (msec) : 100=0.10%, 500=45.54%, 750=37.50%, 1000=0.58%, >=2000=16.28% 00:27:13.434 cpu : usr=0.06%, sys=1.83%, ctx=1050, majf=0, minf=32769 00:27:13.434 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:27:13.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.434 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:13.434 issued rwts: total=1032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.434 job1: (groupid=0, jobs=1): err= 0: pid=3423342: Sun Nov 17 16:03:57 2024 00:27:13.434 read: IOPS=1, BW=1918KiB/s (1964kB/s)(20.0MiB/10680msec) 00:27:13.434 slat (msec): min=2, max=2129, avg=530.27, stdev=898.27 00:27:13.434 clat (msec): min=74, max=10577, avg=6790.52, stdev=3524.59 00:27:13.434 lat (msec): min=2091, max=10679, avg=7320.79, stdev=3247.87 00:27:13.434 clat percentiles (msec): 00:27:13.434 | 1.00th=[ 74], 5.00th=[ 74], 10.00th=[ 2089], 20.00th=[ 2140], 00:27:13.434 | 30.00th=[ 4279], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8557], 00:27:13.434 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10537], 95.00th=[10537], 00:27:13.434 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:27:13.434 | 99.99th=[10537] 00:27:13.434 lat (msec) : 100=5.00%, >=2000=95.00% 00:27:13.434 cpu : usr=0.00%, sys=0.17%, ctx=56, majf=0, minf=5121 00:27:13.434 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0% 00:27:13.434 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.434 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:27:13.434 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.434 job1: (groupid=0, jobs=1): err= 0: pid=3423343: Sun Nov 17 16:03:57 2024 00:27:13.434 read: IOPS=49, BW=49.7MiB/s (52.1MB/s)(532MiB/10714msec) 00:27:13.434 slat (usec): min=45, max=2112.8k, avg=19998.97, stdev=153853.55 00:27:13.434 clat (msec): min=71, max=8845, avg=2475.04, stdev=3183.29 00:27:13.434 lat (msec): min=546, max=8847, avg=2495.04, stdev=3192.32 00:27:13.434 clat percentiles (msec): 00:27:13.434 | 1.00th=[ 550], 5.00th=[ 550], 10.00th=[ 550], 20.00th=[ 550], 00:27:13.434 | 30.00th=[ 550], 40.00th=[ 558], 50.00th=[ 592], 60.00th=[ 625], 00:27:13.434 | 70.00th=[ 1267], 80.00th=[ 7148], 90.00th=[ 8658], 95.00th=[ 8658], 00:27:13.434 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:27:13.434 | 99.99th=[ 8792] 00:27:13.434 bw ( KiB/s): min= 1553, max=235520, per=2.23%, avg=75264.27, stdev=90974.23, samples=11 00:27:13.434 iops : min= 1, max= 230, avg=73.36, stdev=88.92, samples=11 00:27:13.434 lat (msec) : 100=0.19%, 750=65.60%, 1000=0.56%, 2000=6.58%, >=2000=27.07% 00:27:13.434 cpu : usr=0.05%, sys=1.33%, ctx=559, majf=0, minf=32769 00:27:13.434 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:27:13.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.434 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.434 issued rwts: total=532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.434 job1: (groupid=0, jobs=1): err= 0: pid=3423344: Sun Nov 17 16:03:57 2024 00:27:13.434 read: IOPS=48, BW=48.5MiB/s (50.8MB/s)(493MiB/10171msec) 00:27:13.434 slat (usec): min=42, max=2075.7k, avg=20394.65, stdev=159135.90 00:27:13.434 clat (msec): min=113, max=5164, avg=1831.15, stdev=1428.67 00:27:13.434 lat (msec): min=186, max=5185, avg=1851.55, stdev=1435.67 00:27:13.434 clat percentiles (msec): 00:27:13.434 | 1.00th=[ 192], 5.00th=[ 351], 10.00th=[ 609], 20.00th=[ 869], 00:27:13.434 | 30.00th=[ 902], 40.00th=[ 953], 50.00th=[ 1028], 60.00th=[ 1150], 00:27:13.434 | 70.00th=[ 3071], 80.00th=[ 3104], 90.00th=[ 3205], 95.00th=[ 5067], 00:27:13.434 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5134], 99.95th=[ 5134], 00:27:13.434 | 99.99th=[ 5134] 00:27:13.434 bw ( KiB/s): min= 6144, max=157696, per=2.77%, avg=93384.12, stdev=55235.16, samples=8 00:27:13.434 iops : min= 6, max= 154, avg=91.00, stdev=53.87, samples=8 00:27:13.434 lat (msec) : 250=3.45%, 500=5.07%, 750=4.87%, 1000=34.89%, 2000=16.63% 00:27:13.434 lat (msec) : >=2000=35.09% 00:27:13.434 cpu : usr=0.02%, sys=1.70%, ctx=447, majf=0, minf=32769 00:27:13.434 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.2%, 32=6.5%, >=64=87.2% 00:27:13.434 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.434 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.434 issued rwts: total=493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.434 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.434 job1: (groupid=0, jobs=1): err= 0: pid=3423345: Sun Nov 17 16:03:57 2024 00:27:13.434 read: IOPS=20, BW=20.5MiB/s (21.4MB/s)(218MiB/10658msec) 00:27:13.434 slat (usec): min=647, max=2109.1k, avg=48869.88, 
stdev=243519.53 00:27:13.434 clat (msec): min=2, max=10586, avg=5616.92, stdev=3211.15 00:27:13.434 lat (msec): min=1240, max=10657, avg=5665.79, stdev=3195.01 00:27:13.434 clat percentiles (msec): 00:27:13.435 | 1.00th=[ 1267], 5.00th=[ 1318], 10.00th=[ 1351], 20.00th=[ 1519], 00:27:13.435 | 30.00th=[ 3406], 40.00th=[ 3608], 50.00th=[ 6141], 60.00th=[ 7752], 00:27:13.435 | 70.00th=[ 8658], 80.00th=[ 8926], 90.00th=[ 9060], 95.00th=[ 9194], 00:27:13.435 | 99.00th=[10537], 99.50th=[10537], 99.90th=[10537], 99.95th=[10537], 00:27:13.435 | 99.99th=[10537] 00:27:13.435 bw ( KiB/s): min= 2048, max=57344, per=0.78%, avg=26339.00, stdev=17029.73, samples=7 00:27:13.435 iops : min= 2, max= 56, avg=25.71, stdev=16.63, samples=7 00:27:13.435 lat (msec) : 4=0.46%, 2000=27.06%, >=2000=72.48% 00:27:13.435 cpu : usr=0.00%, sys=0.97%, ctx=624, majf=0, minf=32769 00:27:13.435 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.7%, 16=7.3%, 32=14.7%, >=64=71.1% 00:27:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.435 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:27:13.435 issued rwts: total=218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.435 job1: (groupid=0, jobs=1): err= 0: pid=3423347: Sun Nov 17 16:03:57 2024 00:27:13.435 read: IOPS=50, BW=50.0MiB/s (52.5MB/s)(536MiB/10710msec) 00:27:13.435 slat (usec): min=64, max=2077.3k, avg=19968.52, stdev=164551.35 00:27:13.435 clat (msec): min=2, max=8710, avg=2453.29, stdev=2786.82 00:27:13.435 lat (msec): min=574, max=8710, avg=2473.26, stdev=2796.77 00:27:13.435 clat percentiles (msec): 00:27:13.435 | 1.00th=[ 575], 5.00th=[ 575], 10.00th=[ 584], 20.00th=[ 584], 00:27:13.435 | 30.00th=[ 592], 40.00th=[ 600], 50.00th=[ 609], 60.00th=[ 735], 00:27:13.435 | 70.00th=[ 2735], 80.00th=[ 6208], 90.00th=[ 6946], 95.00th=[ 8658], 00:27:13.435 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:27:13.435 | 99.99th=[ 8658] 00:27:13.435 bw ( KiB/s): min= 8192, max=221184, per=2.48%, avg=83558.40, stdev=84412.50, samples=10 00:27:13.435 iops : min= 8, max= 216, avg=81.60, stdev=82.43, samples=10 00:27:13.435 lat (msec) : 4=0.19%, 750=60.07%, 2000=6.53%, >=2000=33.21% 00:27:13.435 cpu : usr=0.01%, sys=1.85%, ctx=518, majf=0, minf=32769 00:27:13.435 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:27:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.435 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.435 issued rwts: total=536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.435 job2: (groupid=0, jobs=1): err= 0: pid=3423356: Sun Nov 17 16:03:57 2024 00:27:13.435 read: IOPS=42, BW=42.6MiB/s (44.6MB/s)(430MiB/10105msec) 00:27:13.435 slat (usec): min=48, max=2109.2k, avg=23259.46, stdev=140407.18 00:27:13.435 clat (msec): min=100, max=6975, avg=2160.17, stdev=1915.95 00:27:13.435 lat (msec): min=139, max=6979, avg=2183.43, stdev=1926.52 00:27:13.435 clat percentiles (msec): 00:27:13.435 | 1.00th=[ 150], 5.00th=[ 464], 10.00th=[ 835], 20.00th=[ 1020], 00:27:13.435 | 30.00th=[ 1083], 40.00th=[ 1351], 50.00th=[ 1569], 60.00th=[ 1687], 00:27:13.435 | 70.00th=[ 1871], 80.00th=[ 2433], 90.00th=[ 6611], 95.00th=[ 6879], 00:27:13.435 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:27:13.435 | 99.99th=[ 6946] 00:27:13.435 bw ( KiB/s): min=24576, 
max=141312, per=2.30%, avg=77542.88, stdev=44794.52, samples=8 00:27:13.435 iops : min= 24, max= 138, avg=75.50, stdev=43.91, samples=8 00:27:13.435 lat (msec) : 250=2.33%, 500=4.65%, 750=2.56%, 1000=9.53%, 2000=54.19% 00:27:13.435 lat (msec) : >=2000=26.74% 00:27:13.435 cpu : usr=0.02%, sys=1.44%, ctx=764, majf=0, minf=32769 00:27:13.435 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.4%, >=64=85.3% 00:27:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.435 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.435 issued rwts: total=430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.435 job2: (groupid=0, jobs=1): err= 0: pid=3423357: Sun Nov 17 16:03:57 2024 00:27:13.435 read: IOPS=31, BW=31.8MiB/s (33.4MB/s)(320MiB/10055msec) 00:27:13.435 slat (usec): min=41, max=2138.0k, avg=31248.11, stdev=195759.08 00:27:13.435 clat (msec): min=53, max=8478, avg=2109.18, stdev=2065.48 00:27:13.435 lat (msec): min=56, max=8493, avg=2140.43, stdev=2096.90 00:27:13.435 clat percentiles (msec): 00:27:13.435 | 1.00th=[ 59], 5.00th=[ 124], 10.00th=[ 192], 20.00th=[ 330], 00:27:13.435 | 30.00th=[ 472], 40.00th=[ 709], 50.00th=[ 911], 60.00th=[ 3373], 00:27:13.435 | 70.00th=[ 3742], 80.00th=[ 4212], 90.00th=[ 4396], 95.00th=[ 6342], 00:27:13.435 | 99.00th=[ 8423], 99.50th=[ 8490], 99.90th=[ 8490], 99.95th=[ 8490], 00:27:13.435 | 99.99th=[ 8490] 00:27:13.435 bw ( KiB/s): min= 6131, max=235081, per=2.88%, avg=97167.00, stdev=97365.55, samples=4 00:27:13.435 iops : min= 5, max= 229, avg=94.50, stdev=95.12, samples=4 00:27:13.435 lat (msec) : 100=3.44%, 250=9.69%, 500=18.75%, 750=10.31%, 1000=9.69% 00:27:13.435 lat (msec) : 2000=7.50%, >=2000=40.62% 00:27:13.435 cpu : usr=0.02%, sys=1.22%, ctx=627, majf=0, minf=32769 00:27:13.435 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=5.0%, 32=10.0%, >=64=80.3% 00:27:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.435 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:27:13.435 issued rwts: total=320,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.435 job2: (groupid=0, jobs=1): err= 0: pid=3423358: Sun Nov 17 16:03:57 2024 00:27:13.435 read: IOPS=75, BW=76.0MiB/s (79.7MB/s)(767MiB/10094msec) 00:27:13.435 slat (usec): min=40, max=2104.7k, avg=13037.15, stdev=104446.87 00:27:13.435 clat (msec): min=89, max=5151, avg=1072.98, stdev=810.30 00:27:13.435 lat (msec): min=94, max=5155, avg=1086.02, stdev=823.08 00:27:13.435 clat percentiles (msec): 00:27:13.435 | 1.00th=[ 109], 5.00th=[ 313], 10.00th=[ 600], 20.00th=[ 852], 00:27:13.435 | 30.00th=[ 919], 40.00th=[ 961], 50.00th=[ 978], 60.00th=[ 995], 00:27:13.435 | 70.00th=[ 1053], 80.00th=[ 1083], 90.00th=[ 1116], 95.00th=[ 1217], 00:27:13.435 | 99.00th=[ 5134], 99.50th=[ 5134], 99.90th=[ 5134], 99.95th=[ 5134], 00:27:13.435 | 99.99th=[ 5134] 00:27:13.435 bw ( KiB/s): min=96256, max=157696, per=3.88%, avg=131046.20, stdev=17001.91, samples=10 00:27:13.435 iops : min= 94, max= 154, avg=127.90, stdev=16.62, samples=10 00:27:13.435 lat (msec) : 100=0.65%, 250=2.74%, 500=4.69%, 750=5.22%, 1000=47.98% 00:27:13.435 lat (msec) : 2000=34.81%, >=2000=3.91% 00:27:13.435 cpu : usr=0.07%, sys=1.57%, ctx=692, majf=0, minf=32769 00:27:13.435 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8% 00:27:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.435 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.435 issued rwts: total=767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.435 job2: (groupid=0, jobs=1): err= 0: pid=3423359: Sun Nov 17 16:03:57 2024 00:27:13.435 read: IOPS=6, BW=6226KiB/s (6375kB/s)(65.0MiB/10691msec) 00:27:13.435 slat (usec): min=987, max=2104.8k, avg=163270.82, stdev=476223.19 00:27:13.435 clat (msec): min=77, max=10685, avg=4013.20, stdev=2839.51 00:27:13.435 lat (msec): min=1252, max=10690, avg=4176.48, stdev=2913.83 00:27:13.435 clat percentiles (msec): 00:27:13.435 | 1.00th=[ 79], 5.00th=[ 1267], 10.00th=[ 1334], 20.00th=[ 1418], 00:27:13.435 | 30.00th=[ 1653], 40.00th=[ 1905], 50.00th=[ 3977], 60.00th=[ 4212], 00:27:13.435 | 70.00th=[ 6208], 80.00th=[ 6409], 90.00th=[ 8557], 95.00th=[10537], 00:27:13.435 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:27:13.435 | 99.99th=[10671] 00:27:13.435 lat (msec) : 100=1.54%, 2000=40.00%, >=2000=58.46% 00:27:13.435 cpu : usr=0.00%, sys=0.41%, ctx=246, majf=0, minf=16641 00:27:13.435 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.3%, 16=24.6%, 32=49.2%, >=64=3.1% 00:27:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.435 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:13.435 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.435 job2: (groupid=0, jobs=1): err= 0: pid=3423360: Sun Nov 17 16:03:57 2024 00:27:13.435 read: IOPS=10, BW=10.5MiB/s (11.0MB/s)(112MiB/10629msec) 00:27:13.435 slat (usec): min=513, max=2084.5k, avg=89824.53, stdev=376889.74 00:27:13.435 clat (msec): min=567, max=10626, avg=3554.11, stdev=2960.73 00:27:13.435 lat (msec): min=641, max=10628, avg=3643.93, stdev=3021.30 00:27:13.435 clat percentiles (msec): 00:27:13.435 | 1.00th=[ 642], 5.00th=[ 676], 10.00th=[ 835], 20.00th=[ 1028], 00:27:13.435 | 30.00th=[ 1250], 40.00th=[ 1620], 50.00th=[ 1838], 60.00th=[ 4178], 00:27:13.435 | 70.00th=[ 6208], 80.00th=[ 6342], 90.00th=[ 8490], 95.00th=[10537], 00:27:13.435 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:27:13.435 | 99.99th=[10671] 00:27:13.435 lat (msec) : 750=6.25%, 1000=10.71%, 2000=37.50%, >=2000=45.54% 00:27:13.435 cpu : usr=0.00%, sys=0.73%, ctx=233, majf=0, minf=28673 00:27:13.435 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.1%, 16=14.3%, 32=28.6%, >=64=43.8% 00:27:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.435 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:13.435 issued rwts: total=112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.435 job2: (groupid=0, jobs=1): err= 0: pid=3423361: Sun Nov 17 16:03:57 2024 00:27:13.435 read: IOPS=20, BW=20.3MiB/s (21.3MB/s)(218MiB/10719msec) 00:27:13.435 slat (usec): min=481, max=2019.7k, avg=48804.86, stdev=225534.63 00:27:13.435 clat (msec): min=77, max=10695, avg=5304.98, stdev=1396.74 00:27:13.435 lat (msec): min=2097, max=10696, avg=5353.79, stdev=1371.39 00:27:13.435 clat percentiles (msec): 00:27:13.435 | 1.00th=[ 2123], 5.00th=[ 4077], 10.00th=[ 4329], 20.00th=[ 4732], 00:27:13.435 | 30.00th=[ 4799], 40.00th=[ 4866], 50.00th=[ 5000], 60.00th=[ 5134], 00:27:13.435 | 70.00th=[ 5201], 80.00th=[ 5805], 90.00th=[ 6477], 
95.00th=[ 8658], 00:27:13.435 | 99.00th=[10537], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:27:13.435 | 99.99th=[10671] 00:27:13.435 bw ( KiB/s): min= 1519, max=63488, per=0.79%, avg=26546.57, stdev=23947.64, samples=7 00:27:13.435 iops : min= 1, max= 62, avg=25.71, stdev=23.61, samples=7 00:27:13.435 lat (msec) : 100=0.46%, >=2000=99.54% 00:27:13.435 cpu : usr=0.00%, sys=1.25%, ctx=627, majf=0, minf=32769 00:27:13.435 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.7%, 16=7.3%, 32=14.7%, >=64=71.1% 00:27:13.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.435 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:27:13.436 issued rwts: total=218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.436 job2: (groupid=0, jobs=1): err= 0: pid=3423362: Sun Nov 17 16:03:57 2024 00:27:13.436 read: IOPS=14, BW=14.4MiB/s (15.2MB/s)(154MiB/10658msec) 00:27:13.436 slat (usec): min=578, max=2091.2k, avg=68800.80, stdev=295336.06 00:27:13.436 clat (msec): min=61, max=10621, avg=7578.83, stdev=2379.67 00:27:13.436 lat (msec): min=2073, max=10629, avg=7647.63, stdev=2303.88 00:27:13.436 clat percentiles (msec): 00:27:13.436 | 1.00th=[ 2072], 5.00th=[ 2140], 10.00th=[ 2668], 20.00th=[ 6208], 00:27:13.436 | 30.00th=[ 7953], 40.00th=[ 8087], 50.00th=[ 8288], 60.00th=[ 8490], 00:27:13.436 | 70.00th=[ 8926], 80.00th=[ 9329], 90.00th=[ 9731], 95.00th=[ 9866], 00:27:13.436 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:27:13.436 | 99.99th=[10671] 00:27:13.436 bw ( KiB/s): min= 2048, max=18432, per=0.27%, avg=9216.67, stdev=7957.55, samples=6 00:27:13.436 iops : min= 2, max= 18, avg= 9.00, stdev= 7.77, samples=6 00:27:13.436 lat (msec) : 100=0.65%, >=2000=99.35% 00:27:13.436 cpu : usr=0.00%, sys=1.30%, ctx=465, majf=0, minf=32769 00:27:13.436 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.2%, 16=10.4%, 32=20.8%, >=64=59.1% 00:27:13.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.436 complete : 0=0.0%, 4=96.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.6% 00:27:13.436 issued rwts: total=154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.436 job2: (groupid=0, jobs=1): err= 0: pid=3423363: Sun Nov 17 16:03:57 2024 00:27:13.436 read: IOPS=44, BW=44.8MiB/s (47.0MB/s)(453MiB/10102msec) 00:27:13.436 slat (usec): min=56, max=2053.2k, avg=22113.82, stdev=161799.72 00:27:13.436 clat (msec): min=81, max=5372, avg=2026.27, stdev=1621.54 00:27:13.436 lat (msec): min=114, max=5377, avg=2048.39, stdev=1628.10 00:27:13.436 clat percentiles (msec): 00:27:13.436 | 1.00th=[ 124], 5.00th=[ 215], 10.00th=[ 334], 20.00th=[ 592], 00:27:13.436 | 30.00th=[ 1011], 40.00th=[ 1070], 50.00th=[ 1150], 60.00th=[ 1267], 00:27:13.436 | 70.00th=[ 3339], 80.00th=[ 3473], 90.00th=[ 5134], 95.00th=[ 5269], 00:27:13.436 | 99.00th=[ 5336], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:27:13.436 | 99.99th=[ 5403] 00:27:13.436 bw ( KiB/s): min= 6144, max=181908, per=2.81%, avg=95009.14, stdev=55493.84, samples=7 00:27:13.436 iops : min= 6, max= 177, avg=92.57, stdev=54.05, samples=7 00:27:13.436 lat (msec) : 100=0.22%, 250=6.62%, 500=9.49%, 750=8.39%, 1000=4.64% 00:27:13.436 lat (msec) : 2000=31.13%, >=2000=39.51% 00:27:13.436 cpu : usr=0.02%, sys=1.50%, ctx=631, majf=0, minf=32769 00:27:13.436 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.1% 00:27:13.436 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.436 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.436 issued rwts: total=453,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.436 job2: (groupid=0, jobs=1): err= 0: pid=3423364: Sun Nov 17 16:03:57 2024 00:27:13.436 read: IOPS=60, BW=60.9MiB/s (63.8MB/s)(648MiB/10643msec) 00:27:13.436 slat (usec): min=41, max=1816.9k, avg=16319.35, stdev=94380.70 00:27:13.436 clat (msec): min=63, max=6181, avg=1992.77, stdev=1715.77 00:27:13.436 lat (msec): min=689, max=6262, avg=2009.09, stdev=1723.23 00:27:13.436 clat percentiles (msec): 00:27:13.436 | 1.00th=[ 693], 5.00th=[ 693], 10.00th=[ 709], 20.00th=[ 827], 00:27:13.436 | 30.00th=[ 835], 40.00th=[ 844], 50.00th=[ 885], 60.00th=[ 1418], 00:27:13.436 | 70.00th=[ 2232], 80.00th=[ 3473], 90.00th=[ 5336], 95.00th=[ 5537], 00:27:13.436 | 99.00th=[ 5873], 99.50th=[ 5940], 99.90th=[ 6208], 99.95th=[ 6208], 00:27:13.436 | 99.99th=[ 6208] 00:27:13.436 bw ( KiB/s): min= 2052, max=161792, per=2.11%, avg=71119.00, stdev=61171.20, samples=15 00:27:13.436 iops : min= 2, max= 158, avg=69.27, stdev=59.85, samples=15 00:27:13.436 lat (msec) : 100=0.15%, 750=9.88%, 1000=49.07%, 2000=6.64%, >=2000=34.26% 00:27:13.436 cpu : usr=0.05%, sys=1.15%, ctx=829, majf=0, minf=32769 00:27:13.436 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.3% 00:27:13.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.436 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.436 issued rwts: total=648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.436 job2: (groupid=0, jobs=1): err= 0: pid=3423365: Sun Nov 17 16:03:57 2024 00:27:13.436 read: IOPS=8, BW=9130KiB/s (9349kB/s)(90.0MiB/10094msec) 00:27:13.436 slat (usec): min=395, max=2102.4k, avg=111875.64, stdev=412135.20 00:27:13.436 clat (msec): min=24, max=10088, avg=3414.87, stdev=3679.89 00:27:13.436 lat (msec): min=94, max=10093, avg=3526.75, stdev=3728.39 00:27:13.436 clat percentiles (msec): 00:27:13.436 | 1.00th=[ 25], 5.00th=[ 142], 10.00th=[ 266], 20.00th=[ 397], 00:27:13.436 | 30.00th=[ 793], 40.00th=[ 961], 50.00th=[ 1217], 60.00th=[ 1502], 00:27:13.436 | 70.00th=[ 5738], 80.00th=[ 5873], 90.00th=[10134], 95.00th=[10134], 00:27:13.436 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:27:13.436 | 99.99th=[10134] 00:27:13.436 lat (msec) : 50=1.11%, 100=1.11%, 250=4.44%, 500=15.56%, 750=6.67% 00:27:13.436 lat (msec) : 1000=12.22%, 2000=21.11%, >=2000=37.78% 00:27:13.436 cpu : usr=0.00%, sys=0.63%, ctx=253, majf=0, minf=23041 00:27:13.436 IO depths : 1=1.1%, 2=2.2%, 4=4.4%, 8=8.9%, 16=17.8%, 32=35.6%, >=64=30.0% 00:27:13.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.436 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:13.436 issued rwts: total=90,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.436 job2: (groupid=0, jobs=1): err= 0: pid=3423367: Sun Nov 17 16:03:57 2024 00:27:13.436 read: IOPS=21, BW=21.5MiB/s (22.5MB/s)(216MiB/10050msec) 00:27:13.436 slat (usec): min=566, max=2127.1k, avg=46406.43, stdev=240372.04 00:27:13.436 clat (msec): min=24, max=8773, avg=5383.30, stdev=3519.51 00:27:13.436 lat (msec): min=95, max=8816, avg=5429.70, stdev=3510.52 
00:27:13.436 clat percentiles (msec): 00:27:13.436 | 1.00th=[ 102], 5.00th=[ 275], 10.00th=[ 502], 20.00th=[ 1036], 00:27:13.436 | 30.00th=[ 1871], 40.00th=[ 4077], 50.00th=[ 8087], 60.00th=[ 8288], 00:27:13.436 | 70.00th=[ 8356], 80.00th=[ 8557], 90.00th=[ 8658], 95.00th=[ 8792], 00:27:13.436 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:27:13.436 | 99.99th=[ 8792] 00:27:13.436 bw ( KiB/s): min= 2048, max=36864, per=0.65%, avg=21845.33, stdev=13502.31, samples=6 00:27:13.436 iops : min= 2, max= 36, avg=21.33, stdev=13.19, samples=6 00:27:13.436 lat (msec) : 50=0.46%, 100=0.46%, 250=3.24%, 500=5.56%, 750=4.63% 00:27:13.436 lat (msec) : 1000=4.63%, 2000=17.59%, >=2000=63.43% 00:27:13.436 cpu : usr=0.02%, sys=1.18%, ctx=521, majf=0, minf=32769 00:27:13.436 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.7%, 16=7.4%, 32=14.8%, >=64=70.8% 00:27:13.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.436 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:27:13.436 issued rwts: total=216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.436 job2: (groupid=0, jobs=1): err= 0: pid=3423368: Sun Nov 17 16:03:57 2024 00:27:13.436 read: IOPS=81, BW=81.2MiB/s (85.2MB/s)(817MiB/10057msec) 00:27:13.436 slat (usec): min=41, max=2045.7k, avg=12234.27, stdev=99966.86 00:27:13.436 clat (msec): min=54, max=4888, avg=942.81, stdev=534.20 00:27:13.436 lat (msec): min=57, max=4911, avg=955.05, stdev=551.81 00:27:13.436 clat percentiles (msec): 00:27:13.436 | 1.00th=[ 153], 5.00th=[ 430], 10.00th=[ 701], 20.00th=[ 802], 00:27:13.436 | 30.00th=[ 852], 40.00th=[ 877], 50.00th=[ 902], 60.00th=[ 927], 00:27:13.436 | 70.00th=[ 969], 80.00th=[ 1003], 90.00th=[ 1133], 95.00th=[ 1150], 00:27:13.436 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4866], 99.95th=[ 4866], 00:27:13.436 | 99.99th=[ 4866] 00:27:13.436 bw ( KiB/s): min=22528, max=212566, per=3.87%, avg=130796.30, stdev=48111.58, samples=10 00:27:13.436 iops : min= 22, max= 207, avg=127.60, stdev=46.86, samples=10 00:27:13.436 lat (msec) : 100=0.49%, 250=1.96%, 500=3.79%, 750=9.18%, 1000=64.50% 00:27:13.436 lat (msec) : 2000=17.75%, >=2000=2.33% 00:27:13.436 cpu : usr=0.06%, sys=1.95%, ctx=702, majf=0, minf=32769 00:27:13.436 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:27:13.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.436 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:13.436 issued rwts: total=817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.436 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.436 job2: (groupid=0, jobs=1): err= 0: pid=3423369: Sun Nov 17 16:03:57 2024 00:27:13.436 read: IOPS=29, BW=29.6MiB/s (31.0MB/s)(301MiB/10166msec) 00:27:13.436 slat (usec): min=53, max=2118.9k, avg=33386.95, stdev=199485.69 00:27:13.436 clat (msec): min=114, max=9882, avg=3409.95, stdev=2661.65 00:27:13.436 lat (msec): min=196, max=9916, avg=3443.33, stdev=2673.14 00:27:13.436 clat percentiles (msec): 00:27:13.436 | 1.00th=[ 232], 5.00th=[ 502], 10.00th=[ 642], 20.00th=[ 726], 00:27:13.437 | 30.00th=[ 860], 40.00th=[ 1116], 50.00th=[ 3138], 60.00th=[ 5537], 00:27:13.437 | 70.00th=[ 6342], 80.00th=[ 6544], 90.00th=[ 6611], 95.00th=[ 6678], 00:27:13.437 | 99.00th=[ 6745], 99.50th=[ 7953], 99.90th=[ 9866], 99.95th=[ 9866], 00:27:13.437 | 99.99th=[ 9866] 00:27:13.437 bw ( KiB/s): min= 4096, max=139264, per=1.31%, 
avg=44278.25, stdev=50653.34, samples=8 00:27:13.437 iops : min= 4, max= 136, avg=43.12, stdev=49.48, samples=8 00:27:13.437 lat (msec) : 250=1.66%, 500=3.32%, 750=16.61%, 1000=17.28%, 2000=7.97% 00:27:13.437 lat (msec) : >=2000=53.16% 00:27:13.437 cpu : usr=0.01%, sys=1.52%, ctx=508, majf=0, minf=32769 00:27:13.437 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.3%, 32=10.6%, >=64=79.1% 00:27:13.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.437 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:27:13.437 issued rwts: total=301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.437 job3: (groupid=0, jobs=1): err= 0: pid=3423374: Sun Nov 17 16:03:57 2024 00:27:13.437 read: IOPS=56, BW=56.1MiB/s (58.8MB/s)(565MiB/10068msec) 00:27:13.437 slat (usec): min=40, max=2072.6k, avg=17764.80, stdev=135909.71 00:27:13.437 clat (msec): min=26, max=7974, avg=2174.97, stdev=2830.73 00:27:13.437 lat (msec): min=68, max=7976, avg=2192.73, stdev=2838.92 00:27:13.437 clat percentiles (msec): 00:27:13.437 | 1.00th=[ 165], 5.00th=[ 558], 10.00th=[ 567], 20.00th=[ 567], 00:27:13.437 | 30.00th=[ 567], 40.00th=[ 575], 50.00th=[ 584], 60.00th=[ 634], 00:27:13.437 | 70.00th=[ 885], 80.00th=[ 6812], 90.00th=[ 7617], 95.00th=[ 7819], 00:27:13.437 | 99.00th=[ 7953], 99.50th=[ 7953], 99.90th=[ 7953], 99.95th=[ 7953], 00:27:13.437 | 99.99th=[ 7953] 00:27:13.437 bw ( KiB/s): min= 8175, max=227328, per=2.65%, avg=89364.10, stdev=84090.04, samples=10 00:27:13.437 iops : min= 7, max= 222, avg=87.00, stdev=82.33, samples=10 00:27:13.437 lat (msec) : 50=0.18%, 100=0.35%, 250=1.24%, 500=1.95%, 750=62.12% 00:27:13.437 lat (msec) : 1000=6.37%, 2000=3.72%, >=2000=24.07% 00:27:13.437 cpu : usr=0.10%, sys=1.58%, ctx=766, majf=0, minf=32769 00:27:13.437 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.7%, >=64=88.8% 00:27:13.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.437 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.437 issued rwts: total=565,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.437 job3: (groupid=0, jobs=1): err= 0: pid=3423375: Sun Nov 17 16:03:57 2024 00:27:13.437 read: IOPS=21, BW=21.7MiB/s (22.8MB/s)(220MiB/10120msec) 00:27:13.437 slat (usec): min=71, max=2081.3k, avg=45455.27, stdev=262899.30 00:27:13.437 clat (msec): min=117, max=9271, avg=2557.40, stdev=2866.74 00:27:13.437 lat (msec): min=198, max=9273, avg=2602.85, stdev=2898.88 00:27:13.437 clat percentiles (msec): 00:27:13.437 | 1.00th=[ 203], 5.00th=[ 241], 10.00th=[ 363], 20.00th=[ 531], 00:27:13.437 | 30.00th=[ 768], 40.00th=[ 936], 50.00th=[ 1099], 60.00th=[ 1183], 00:27:13.437 | 70.00th=[ 3239], 80.00th=[ 5134], 90.00th=[ 7148], 95.00th=[ 9194], 00:27:13.437 | 99.00th=[ 9194], 99.50th=[ 9194], 99.90th=[ 9329], 99.95th=[ 9329], 00:27:13.437 | 99.99th=[ 9329] 00:27:13.437 bw ( KiB/s): min=94208, max=96256, per=2.82%, avg=95232.00, stdev=1448.15, samples=2 00:27:13.437 iops : min= 92, max= 94, avg=93.00, stdev= 1.41, samples=2 00:27:13.437 lat (msec) : 250=6.36%, 500=10.00%, 750=12.27%, 1000=14.09%, 2000=25.45% 00:27:13.437 lat (msec) : >=2000=31.82% 00:27:13.437 cpu : usr=0.04%, sys=1.36%, ctx=201, majf=0, minf=32769 00:27:13.437 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.3%, 32=14.5%, >=64=71.4% 00:27:13.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:27:13.437 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:27:13.437 issued rwts: total=220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.437 job3: (groupid=0, jobs=1): err= 0: pid=3423376: Sun Nov 17 16:03:57 2024 00:27:13.437 read: IOPS=42, BW=42.7MiB/s (44.8MB/s)(428MiB/10018msec) 00:27:13.437 slat (usec): min=41, max=2131.3k, avg=23359.22, stdev=168034.64 00:27:13.437 clat (msec): min=16, max=5006, avg=1791.15, stdev=1466.48 00:27:13.437 lat (msec): min=17, max=5007, avg=1814.50, stdev=1474.25 00:27:13.437 clat percentiles (msec): 00:27:13.437 | 1.00th=[ 24], 5.00th=[ 359], 10.00th=[ 693], 20.00th=[ 718], 00:27:13.437 | 30.00th=[ 768], 40.00th=[ 810], 50.00th=[ 827], 60.00th=[ 969], 00:27:13.437 | 70.00th=[ 2970], 80.00th=[ 3742], 90.00th=[ 3943], 95.00th=[ 4010], 00:27:13.437 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:27:13.437 | 99.99th=[ 5000] 00:27:13.437 bw ( KiB/s): min= 6144, max=188416, per=2.38%, avg=80457.14, stdev=76187.38, samples=7 00:27:13.437 iops : min= 6, max= 184, avg=78.57, stdev=74.40, samples=7 00:27:13.437 lat (msec) : 20=0.70%, 50=1.87%, 100=0.47%, 250=1.17%, 500=1.87% 00:27:13.437 lat (msec) : 750=19.86%, 1000=34.81%, 2000=4.67%, >=2000=34.58% 00:27:13.437 cpu : usr=0.01%, sys=1.50%, ctx=596, majf=0, minf=32769 00:27:13.437 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.7%, 32=7.5%, >=64=85.3% 00:27:13.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.437 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.437 issued rwts: total=428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.437 job3: (groupid=0, jobs=1): err= 0: pid=3423377: Sun Nov 17 16:03:57 2024 00:27:13.437 read: IOPS=44, BW=44.3MiB/s (46.4MB/s)(444MiB/10029msec) 00:27:13.437 slat (usec): min=52, max=2106.0k, avg=22521.28, stdev=169276.79 00:27:13.437 clat (msec): min=26, max=9991, avg=2719.14, stdev=3303.00 00:27:13.437 lat (msec): min=29, max=9994, avg=2741.66, stdev=3310.38 00:27:13.437 clat percentiles (msec): 00:27:13.437 | 1.00th=[ 74], 5.00th=[ 542], 10.00th=[ 567], 20.00th=[ 567], 00:27:13.437 | 30.00th=[ 575], 40.00th=[ 575], 50.00th=[ 575], 60.00th=[ 609], 00:27:13.437 | 70.00th=[ 2735], 80.00th=[ 7953], 90.00th=[ 8087], 95.00th=[ 8221], 00:27:13.437 | 99.00th=[ 8288], 99.50th=[ 9866], 99.90th=[10000], 99.95th=[10000], 00:27:13.437 | 99.99th=[10000] 00:27:13.437 bw ( KiB/s): min= 2043, max=221184, per=2.00%, avg=67355.89, stdev=89085.19, samples=9 00:27:13.437 iops : min= 1, max= 216, avg=65.67, stdev=87.09, samples=9 00:27:13.437 lat (msec) : 50=0.68%, 100=0.68%, 250=1.35%, 500=2.03%, 750=57.43% 00:27:13.437 lat (msec) : 1000=4.73%, 2000=2.70%, >=2000=30.41% 00:27:13.437 cpu : usr=0.02%, sys=1.30%, ctx=684, majf=0, minf=32769 00:27:13.437 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.8% 00:27:13.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.437 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.437 issued rwts: total=444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.437 job3: (groupid=0, jobs=1): err= 0: pid=3423378: Sun Nov 17 16:03:57 2024 00:27:13.437 read: IOPS=79, BW=79.8MiB/s (83.7MB/s)(809MiB/10134msec) 00:27:13.437 slat (usec): min=41, 
max=2035.0k, avg=12366.81, stdev=99684.40 00:27:13.437 clat (msec): min=122, max=4943, avg=1125.67, stdev=900.55 00:27:13.437 lat (msec): min=136, max=4947, avg=1138.03, stdev=909.66 00:27:13.437 clat percentiles (msec): 00:27:13.437 | 1.00th=[ 255], 5.00th=[ 558], 10.00th=[ 718], 20.00th=[ 818], 00:27:13.437 | 30.00th=[ 852], 40.00th=[ 885], 50.00th=[ 911], 60.00th=[ 944], 00:27:13.437 | 70.00th=[ 978], 80.00th=[ 1036], 90.00th=[ 1301], 95.00th=[ 2937], 00:27:13.437 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 4933], 99.95th=[ 4933], 00:27:13.437 | 99.99th=[ 4933] 00:27:13.437 bw ( KiB/s): min=51200, max=188416, per=3.76%, avg=126960.00, stdev=37086.74, samples=11 00:27:13.437 iops : min= 50, max= 184, avg=123.91, stdev=36.31, samples=11 00:27:13.437 lat (msec) : 250=0.99%, 500=2.35%, 750=9.39%, 1000=62.30%, 2000=18.67% 00:27:13.437 lat (msec) : >=2000=6.30% 00:27:13.437 cpu : usr=0.05%, sys=2.13%, ctx=743, majf=0, minf=32769 00:27:13.437 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:27:13.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.437 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:13.437 issued rwts: total=809,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.437 job3: (groupid=0, jobs=1): err= 0: pid=3423379: Sun Nov 17 16:03:57 2024 00:27:13.437 read: IOPS=24, BW=24.4MiB/s (25.6MB/s)(247MiB/10113msec) 00:27:13.437 slat (usec): min=41, max=2020.1k, avg=40624.27, stdev=203625.10 00:27:13.437 clat (msec): min=77, max=6552, avg=4050.61, stdev=2303.34 00:27:13.437 lat (msec): min=146, max=6555, avg=4091.23, stdev=2291.36 00:27:13.437 clat percentiles (msec): 00:27:13.437 | 1.00th=[ 150], 5.00th=[ 359], 10.00th=[ 684], 20.00th=[ 1418], 00:27:13.437 | 30.00th=[ 1871], 40.00th=[ 3608], 50.00th=[ 5604], 60.00th=[ 5873], 00:27:13.437 | 70.00th=[ 6074], 80.00th=[ 6275], 90.00th=[ 6409], 95.00th=[ 6477], 00:27:13.437 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544], 00:27:13.437 | 99.99th=[ 6544] 00:27:13.437 bw ( KiB/s): min= 2048, max=86016, per=0.90%, avg=30464.00, stdev=28618.40, samples=8 00:27:13.437 iops : min= 2, max= 84, avg=29.75, stdev=27.95, samples=8 00:27:13.437 lat (msec) : 100=0.40%, 250=2.83%, 500=3.64%, 750=4.05%, 1000=4.45% 00:27:13.437 lat (msec) : 2000=19.03%, >=2000=65.59% 00:27:13.437 cpu : usr=0.03%, sys=0.95%, ctx=849, majf=0, minf=32769 00:27:13.437 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.2%, 16=6.5%, 32=13.0%, >=64=74.5% 00:27:13.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.437 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:27:13.437 issued rwts: total=247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.437 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.437 job3: (groupid=0, jobs=1): err= 0: pid=3423380: Sun Nov 17 16:03:57 2024 00:27:13.437 read: IOPS=8, BW=9043KiB/s (9260kB/s)(89.0MiB/10078msec) 00:27:13.437 slat (usec): min=1555, max=3952.9k, avg=112362.88, stdev=509132.77 00:27:13.437 clat (msec): min=77, max=10073, avg=4151.74, stdev=4219.84 00:27:13.437 lat (msec): min=85, max=10077, avg=4264.10, stdev=4243.18 00:27:13.437 clat percentiles (msec): 00:27:13.437 | 1.00th=[ 78], 5.00th=[ 209], 10.00th=[ 313], 20.00th=[ 518], 00:27:13.437 | 30.00th=[ 760], 40.00th=[ 1020], 50.00th=[ 1368], 60.00th=[ 3574], 00:27:13.437 | 70.00th=[ 9731], 80.00th=[10000], 90.00th=[10000], 95.00th=[10000], 
00:27:13.437 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:27:13.437 | 99.99th=[10134] 00:27:13.438 lat (msec) : 100=2.25%, 250=6.74%, 500=10.11%, 750=8.99%, 1000=10.11% 00:27:13.438 lat (msec) : 2000=19.10%, >=2000=42.70% 00:27:13.438 cpu : usr=0.00%, sys=0.67%, ctx=321, majf=0, minf=22785 00:27:13.438 IO depths : 1=1.1%, 2=2.2%, 4=4.5%, 8=9.0%, 16=18.0%, 32=36.0%, >=64=29.2% 00:27:13.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.438 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:13.438 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.438 job3: (groupid=0, jobs=1): err= 0: pid=3423381: Sun Nov 17 16:03:57 2024 00:27:13.438 read: IOPS=34, BW=34.0MiB/s (35.7MB/s)(365MiB/10726msec) 00:27:13.438 slat (usec): min=59, max=2071.4k, avg=29160.39, stdev=210954.10 00:27:13.438 clat (msec): min=80, max=9079, avg=3571.90, stdev=3304.60 00:27:13.438 lat (msec): min=837, max=9079, avg=3601.06, stdev=3309.45 00:27:13.438 clat percentiles (msec): 00:27:13.438 | 1.00th=[ 835], 5.00th=[ 844], 10.00th=[ 844], 20.00th=[ 852], 00:27:13.438 | 30.00th=[ 860], 40.00th=[ 869], 50.00th=[ 885], 60.00th=[ 2903], 00:27:13.438 | 70.00th=[ 6477], 80.00th=[ 8658], 90.00th=[ 8792], 95.00th=[ 8926], 00:27:13.438 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:27:13.438 | 99.99th=[ 9060] 00:27:13.438 bw ( KiB/s): min= 1519, max=155648, per=1.80%, avg=60861.87, stdev=58780.56, samples=8 00:27:13.438 iops : min= 1, max= 152, avg=59.37, stdev=57.47, samples=8 00:27:13.438 lat (msec) : 100=0.27%, 1000=51.51%, >=2000=48.22% 00:27:13.438 cpu : usr=0.00%, sys=1.21%, ctx=329, majf=0, minf=32769 00:27:13.438 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.4%, 32=8.8%, >=64=82.7% 00:27:13.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.438 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:27:13.438 issued rwts: total=365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.438 job3: (groupid=0, jobs=1): err= 0: pid=3423382: Sun Nov 17 16:03:57 2024 00:27:13.438 read: IOPS=25, BW=25.8MiB/s (27.1MB/s)(261MiB/10100msec) 00:27:13.438 slat (usec): min=432, max=2056.5k, avg=38394.65, stdev=210449.34 00:27:13.438 clat (msec): min=77, max=6340, avg=3714.03, stdev=2270.75 00:27:13.438 lat (msec): min=131, max=6347, avg=3752.43, stdev=2259.88 00:27:13.438 clat percentiles (msec): 00:27:13.438 | 1.00th=[ 153], 5.00th=[ 468], 10.00th=[ 978], 20.00th=[ 1318], 00:27:13.438 | 30.00th=[ 1502], 40.00th=[ 1703], 50.00th=[ 3742], 60.00th=[ 5671], 00:27:13.438 | 70.00th=[ 5940], 80.00th=[ 6074], 90.00th=[ 6208], 95.00th=[ 6208], 00:27:13.438 | 99.00th=[ 6275], 99.50th=[ 6275], 99.90th=[ 6342], 99.95th=[ 6342], 00:27:13.438 | 99.99th=[ 6342] 00:27:13.438 bw ( KiB/s): min= 8192, max=94208, per=1.34%, avg=45387.67, stdev=34939.82, samples=6 00:27:13.438 iops : min= 8, max= 92, avg=44.17, stdev=34.21, samples=6 00:27:13.438 lat (msec) : 100=0.38%, 250=1.53%, 500=3.45%, 750=2.68%, 1000=2.68% 00:27:13.438 lat (msec) : 2000=31.03%, >=2000=58.24% 00:27:13.438 cpu : usr=0.00%, sys=1.04%, ctx=777, majf=0, minf=32769 00:27:13.438 IO depths : 1=0.4%, 2=0.8%, 4=1.5%, 8=3.1%, 16=6.1%, 32=12.3%, >=64=75.9% 00:27:13.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.438 complete : 0=0.0%, 4=99.3%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:27:13.438 issued rwts: total=261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.438 job3: (groupid=0, jobs=1): err= 0: pid=3423383: Sun Nov 17 16:03:57 2024 00:27:13.438 read: IOPS=21, BW=21.9MiB/s (23.0MB/s)(220MiB/10033msec) 00:27:13.438 slat (usec): min=682, max=2088.3k, avg=45474.41, stdev=234649.01 00:27:13.438 clat (msec): min=26, max=6889, avg=4270.98, stdev=2623.57 00:27:13.438 lat (msec): min=35, max=6897, avg=4316.46, stdev=2607.13 00:27:13.438 clat percentiles (msec): 00:27:13.438 | 1.00th=[ 44], 5.00th=[ 203], 10.00th=[ 477], 20.00th=[ 1301], 00:27:13.438 | 30.00th=[ 1469], 40.00th=[ 3440], 50.00th=[ 6074], 60.00th=[ 6342], 00:27:13.438 | 70.00th=[ 6544], 80.00th=[ 6678], 90.00th=[ 6812], 95.00th=[ 6812], 00:27:13.438 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:27:13.438 | 99.99th=[ 6879] 00:27:13.438 bw ( KiB/s): min= 2043, max=53248, per=0.70%, avg=23714.62, stdev=18249.03, samples=8 00:27:13.438 iops : min= 1, max= 52, avg=23.00, stdev=17.94, samples=8 00:27:13.438 lat (msec) : 50=1.82%, 100=0.45%, 250=3.18%, 500=5.00%, 750=2.73% 00:27:13.438 lat (msec) : 1000=2.73%, 2000=20.45%, >=2000=63.64% 00:27:13.438 cpu : usr=0.03%, sys=0.99%, ctx=672, majf=0, minf=32769 00:27:13.438 IO depths : 1=0.5%, 2=0.9%, 4=1.8%, 8=3.6%, 16=7.3%, 32=14.5%, >=64=71.4% 00:27:13.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.438 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:27:13.438 issued rwts: total=220,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.438 job3: (groupid=0, jobs=1): err= 0: pid=3423384: Sun Nov 17 16:03:57 2024 00:27:13.438 read: IOPS=10, BW=10.7MiB/s (11.2MB/s)(108MiB/10133msec) 00:27:13.438 slat (usec): min=395, max=3808.0k, avg=92721.63, stdev=452386.88 00:27:13.438 clat (msec): min=118, max=10130, avg=5200.67, stdev=4216.86 00:27:13.438 lat (msec): min=133, max=10132, avg=5293.40, stdev=4214.17 00:27:13.438 clat percentiles (msec): 00:27:13.438 | 1.00th=[ 133], 5.00th=[ 224], 10.00th=[ 271], 20.00th=[ 693], 00:27:13.438 | 30.00th=[ 1003], 40.00th=[ 1469], 50.00th=[ 5671], 60.00th=[ 5873], 00:27:13.438 | 70.00th=[ 9866], 80.00th=[10000], 90.00th=[10134], 95.00th=[10134], 00:27:13.438 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:27:13.438 | 99.99th=[10134] 00:27:13.438 lat (msec) : 250=7.41%, 500=8.33%, 750=4.63%, 1000=9.26%, 2000=12.96% 00:27:13.438 lat (msec) : >=2000=57.41% 00:27:13.438 cpu : usr=0.00%, sys=0.88%, ctx=330, majf=0, minf=27649 00:27:13.438 IO depths : 1=0.9%, 2=1.9%, 4=3.7%, 8=7.4%, 16=14.8%, 32=29.6%, >=64=41.7% 00:27:13.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.438 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:13.438 issued rwts: total=108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.438 job3: (groupid=0, jobs=1): err= 0: pid=3423385: Sun Nov 17 16:03:57 2024 00:27:13.438 read: IOPS=33, BW=33.7MiB/s (35.3MB/s)(340MiB/10097msec) 00:27:13.438 slat (usec): min=44, max=3824.3k, avg=29455.36, stdev=234021.21 00:27:13.438 clat (msec): min=79, max=5418, avg=2505.74, stdev=1644.66 00:27:13.438 lat (msec): min=212, max=5427, avg=2535.19, stdev=1645.78 00:27:13.438 clat percentiles (msec): 00:27:13.438 | 1.00th=[ 
253], 5.00th=[ 902], 10.00th=[ 969], 20.00th=[ 969], 00:27:13.438 | 30.00th=[ 978], 40.00th=[ 1070], 50.00th=[ 1469], 60.00th=[ 3473], 00:27:13.438 | 70.00th=[ 3809], 80.00th=[ 3977], 90.00th=[ 5134], 95.00th=[ 5269], 00:27:13.438 | 99.00th=[ 5336], 99.50th=[ 5403], 99.90th=[ 5403], 99.95th=[ 5403], 00:27:13.438 | 99.99th=[ 5403] 00:27:13.438 bw ( KiB/s): min=14336, max=131072, per=1.84%, avg=62019.86, stdev=50069.29, samples=7 00:27:13.438 iops : min= 14, max= 128, avg=60.43, stdev=49.04, samples=7 00:27:13.438 lat (msec) : 100=0.29%, 250=0.59%, 500=1.76%, 750=1.76%, 1000=32.65% 00:27:13.438 lat (msec) : 2000=14.12%, >=2000=48.82% 00:27:13.438 cpu : usr=0.04%, sys=1.37%, ctx=546, majf=0, minf=32769 00:27:13.438 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.4%, >=64=81.5% 00:27:13.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.438 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:27:13.438 issued rwts: total=340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.438 job3: (groupid=0, jobs=1): err= 0: pid=3423386: Sun Nov 17 16:03:57 2024 00:27:13.438 read: IOPS=75, BW=75.1MiB/s (78.7MB/s)(754MiB/10041msec) 00:27:13.438 slat (usec): min=41, max=1939.3k, avg=13262.78, stdev=100366.05 00:27:13.438 clat (msec): min=37, max=3672, avg=1316.98, stdev=951.34 00:27:13.438 lat (msec): min=41, max=5611, avg=1330.24, stdev=961.29 00:27:13.438 clat percentiles (msec): 00:27:13.438 | 1.00th=[ 89], 5.00th=[ 518], 10.00th=[ 523], 20.00th=[ 542], 00:27:13.438 | 30.00th=[ 558], 40.00th=[ 701], 50.00th=[ 1020], 60.00th=[ 1116], 00:27:13.438 | 70.00th=[ 1653], 80.00th=[ 1972], 90.00th=[ 3071], 95.00th=[ 3205], 00:27:13.438 | 99.00th=[ 3373], 99.50th=[ 3373], 99.90th=[ 3675], 99.95th=[ 3675], 00:27:13.438 | 99.99th=[ 3675] 00:27:13.438 bw ( KiB/s): min=28614, max=245760, per=3.26%, avg=110214.36, stdev=74288.76, samples=11 00:27:13.438 iops : min= 27, max= 240, avg=107.55, stdev=72.65, samples=11 00:27:13.438 lat (msec) : 50=0.53%, 100=1.06%, 250=1.46%, 500=1.46%, 750=37.00% 00:27:13.438 lat (msec) : 1000=8.09%, 2000=31.30%, >=2000=19.10% 00:27:13.438 cpu : usr=0.04%, sys=1.16%, ctx=968, majf=0, minf=32769 00:27:13.438 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.6% 00:27:13.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.438 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.438 issued rwts: total=754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.438 job4: (groupid=0, jobs=1): err= 0: pid=3423402: Sun Nov 17 16:03:57 2024 00:27:13.438 read: IOPS=10, BW=10.9MiB/s (11.4MB/s)(116MiB/10670msec) 00:27:13.438 slat (usec): min=931, max=2183.0k, avg=91287.08, stdev=378898.89 00:27:13.438 clat (msec): min=80, max=10668, avg=5862.01, stdev=2021.51 00:27:13.438 lat (msec): min=2098, max=10669, avg=5953.30, stdev=1997.10 00:27:13.438 clat percentiles (msec): 00:27:13.438 | 1.00th=[ 2106], 5.00th=[ 4329], 10.00th=[ 4530], 20.00th=[ 4732], 00:27:13.438 | 30.00th=[ 4866], 40.00th=[ 5067], 50.00th=[ 5269], 60.00th=[ 5671], 00:27:13.438 | 70.00th=[ 6007], 80.00th=[ 6208], 90.00th=[10671], 95.00th=[10671], 00:27:13.438 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:27:13.438 | 99.99th=[10671] 00:27:13.438 lat (msec) : 100=0.86%, >=2000=99.14% 00:27:13.438 cpu : usr=0.00%, sys=0.78%, ctx=503, majf=0, 
minf=29697 00:27:13.438 IO depths : 1=0.9%, 2=1.7%, 4=3.4%, 8=6.9%, 16=13.8%, 32=27.6%, >=64=45.7% 00:27:13.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.438 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:27:13.438 issued rwts: total=116,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.438 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.438 job4: (groupid=0, jobs=1): err= 0: pid=3423403: Sun Nov 17 16:03:57 2024 00:27:13.438 read: IOPS=28, BW=28.8MiB/s (30.2MB/s)(310MiB/10772msec) 00:27:13.439 slat (usec): min=541, max=2080.4k, avg=34589.59, stdev=228215.02 00:27:13.439 clat (msec): min=47, max=5337, avg=3345.70, stdev=1936.92 00:27:13.439 lat (msec): min=854, max=5362, avg=3380.29, stdev=1926.27 00:27:13.439 clat percentiles (msec): 00:27:13.439 | 1.00th=[ 852], 5.00th=[ 885], 10.00th=[ 894], 20.00th=[ 936], 00:27:13.439 | 30.00th=[ 978], 40.00th=[ 3171], 50.00th=[ 4597], 60.00th=[ 4933], 00:27:13.439 | 70.00th=[ 5067], 80.00th=[ 5134], 90.00th=[ 5201], 95.00th=[ 5269], 00:27:13.439 | 99.00th=[ 5336], 99.50th=[ 5336], 99.90th=[ 5336], 99.95th=[ 5336], 00:27:13.439 | 99.99th=[ 5336] 00:27:13.439 bw ( KiB/s): min= 2048, max=145408, per=1.84%, avg=62122.67, stdev=64551.54, samples=6 00:27:13.439 iops : min= 2, max= 142, avg=60.67, stdev=63.04, samples=6 00:27:13.439 lat (msec) : 50=0.32%, 1000=30.97%, 2000=5.16%, >=2000=63.55% 00:27:13.439 cpu : usr=0.05%, sys=1.01%, ctx=560, majf=0, minf=32769 00:27:13.439 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.3%, >=64=79.7% 00:27:13.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.439 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:27:13.439 issued rwts: total=310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.439 job4: (groupid=0, jobs=1): err= 0: pid=3423404: Sun Nov 17 16:03:57 2024 00:27:13.439 read: IOPS=115, BW=116MiB/s (121MB/s)(1166MiB/10071msec) 00:27:13.439 slat (usec): min=41, max=2068.5k, avg=8571.40, stdev=83778.27 00:27:13.439 clat (msec): min=67, max=6241, avg=745.46, stdev=1170.52 00:27:13.439 lat (msec): min=90, max=6246, avg=754.03, stdev=1181.37 00:27:13.439 clat percentiles (msec): 00:27:13.439 | 1.00th=[ 109], 5.00th=[ 321], 10.00th=[ 418], 20.00th=[ 422], 00:27:13.439 | 30.00th=[ 426], 40.00th=[ 430], 50.00th=[ 439], 60.00th=[ 447], 00:27:13.439 | 70.00th=[ 502], 80.00th=[ 527], 90.00th=[ 726], 95.00th=[ 1938], 00:27:13.439 | 99.00th=[ 6208], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6275], 00:27:13.439 | 99.99th=[ 6275] 00:27:13.439 bw ( KiB/s): min=149504, max=309248, per=7.87%, avg=265849.00, stdev=52735.75, samples=8 00:27:13.439 iops : min= 146, max= 302, avg=259.50, stdev=51.50, samples=8 00:27:13.439 lat (msec) : 100=0.69%, 250=3.00%, 500=65.61%, 750=20.75%, 1000=1.03% 00:27:13.439 lat (msec) : 2000=4.12%, >=2000=4.80% 00:27:13.439 cpu : usr=0.09%, sys=2.49%, ctx=1345, majf=0, minf=32769 00:27:13.439 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=2.7%, >=64=94.6% 00:27:13.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.439 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:13.439 issued rwts: total=1166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.439 job4: (groupid=0, jobs=1): err= 0: pid=3423405: Sun Nov 17 16:03:57 2024 00:27:13.439 read: 
IOPS=44, BW=44.1MiB/s (46.3MB/s)(468MiB/10607msec) 00:27:13.439 slat (usec): min=42, max=2093.5k, avg=21425.99, stdev=139634.64 00:27:13.439 clat (msec): min=576, max=5775, avg=1503.34, stdev=842.17 00:27:13.439 lat (msec): min=633, max=5807, avg=1524.77, stdev=865.29 00:27:13.439 clat percentiles (msec): 00:27:13.439 | 1.00th=[ 651], 5.00th=[ 852], 10.00th=[ 860], 20.00th=[ 911], 00:27:13.439 | 30.00th=[ 953], 40.00th=[ 986], 50.00th=[ 1028], 60.00th=[ 1200], 00:27:13.439 | 70.00th=[ 1620], 80.00th=[ 2467], 90.00th=[ 2970], 95.00th=[ 3104], 00:27:13.439 | 99.00th=[ 3205], 99.50th=[ 3675], 99.90th=[ 5805], 99.95th=[ 5805], 00:27:13.439 | 99.99th=[ 5805] 00:27:13.439 bw ( KiB/s): min=28672, max=155648, per=2.58%, avg=87040.00, stdev=54018.86, samples=8 00:27:13.439 iops : min= 28, max= 152, avg=85.00, stdev=52.75, samples=8 00:27:13.439 lat (msec) : 750=1.71%, 1000=42.95%, 2000=30.77%, >=2000=24.57% 00:27:13.439 cpu : usr=0.02%, sys=1.07%, ctx=746, majf=0, minf=32769 00:27:13.439 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.5% 00:27:13.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.439 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.439 issued rwts: total=468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.439 job4: (groupid=0, jobs=1): err= 0: pid=3423406: Sun Nov 17 16:03:57 2024 00:27:13.439 read: IOPS=43, BW=43.0MiB/s (45.1MB/s)(456MiB/10597msec) 00:27:13.439 slat (usec): min=35, max=2210.4k, avg=21977.27, stdev=140639.63 00:27:13.439 clat (msec): min=571, max=5229, avg=2776.71, stdev=1480.45 00:27:13.439 lat (msec): min=631, max=5231, avg=2798.68, stdev=1482.72 00:27:13.439 clat percentiles (msec): 00:27:13.439 | 1.00th=[ 642], 5.00th=[ 953], 10.00th=[ 1116], 20.00th=[ 1150], 00:27:13.439 | 30.00th=[ 1167], 40.00th=[ 1871], 50.00th=[ 3071], 60.00th=[ 3406], 00:27:13.439 | 70.00th=[ 3775], 80.00th=[ 4463], 90.00th=[ 4665], 95.00th=[ 4866], 00:27:13.439 | 99.00th=[ 5201], 99.50th=[ 5201], 99.90th=[ 5201], 99.95th=[ 5201], 00:27:13.439 | 99.99th=[ 5201] 00:27:13.439 bw ( KiB/s): min= 2052, max=133120, per=1.63%, avg=55005.25, stdev=31120.64, samples=12 00:27:13.439 iops : min= 2, max= 130, avg=53.67, stdev=30.40, samples=12 00:27:13.439 lat (msec) : 750=2.41%, 1000=3.07%, 2000=36.84%, >=2000=57.68% 00:27:13.439 cpu : usr=0.01%, sys=1.38%, ctx=980, majf=0, minf=32769 00:27:13.439 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.0%, >=64=86.2% 00:27:13.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.439 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.439 issued rwts: total=456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.439 job4: (groupid=0, jobs=1): err= 0: pid=3423407: Sun Nov 17 16:03:57 2024 00:27:13.439 read: IOPS=31, BW=31.1MiB/s (32.6MB/s)(313MiB/10071msec) 00:27:13.439 slat (usec): min=37, max=2123.7k, avg=32036.59, stdev=167678.09 00:27:13.439 clat (msec): min=41, max=6323, avg=3631.38, stdev=1440.56 00:27:13.439 lat (msec): min=97, max=6338, avg=3663.42, stdev=1440.34 00:27:13.439 clat percentiles (msec): 00:27:13.439 | 1.00th=[ 105], 5.00th=[ 422], 10.00th=[ 860], 20.00th=[ 1552], 00:27:13.439 | 30.00th=[ 4010], 40.00th=[ 4245], 50.00th=[ 4329], 60.00th=[ 4396], 00:27:13.439 | 70.00th=[ 4463], 80.00th=[ 4463], 90.00th=[ 4530], 95.00th=[ 4597], 00:27:13.439 | 
99.00th=[ 4665], 99.50th=[ 6275], 99.90th=[ 6342], 99.95th=[ 6342], 00:27:13.439 | 99.99th=[ 6342] 00:27:13.439 bw ( KiB/s): min= 4087, max=69632, per=1.25%, avg=42096.78, stdev=23174.83, samples=9 00:27:13.439 iops : min= 3, max= 68, avg=41.00, stdev=22.84, samples=9 00:27:13.439 lat (msec) : 50=0.32%, 100=0.32%, 250=2.56%, 500=1.92%, 750=3.83% 00:27:13.439 lat (msec) : 1000=2.24%, 2000=9.90%, >=2000=78.91% 00:27:13.439 cpu : usr=0.01%, sys=1.20%, ctx=938, majf=0, minf=32769 00:27:13.439 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.2%, >=64=79.9% 00:27:13.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.439 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:27:13.439 issued rwts: total=313,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.439 job4: (groupid=0, jobs=1): err= 0: pid=3423409: Sun Nov 17 16:03:57 2024 00:27:13.439 read: IOPS=15, BW=15.4MiB/s (16.1MB/s)(165MiB/10747msec) 00:27:13.439 slat (usec): min=430, max=2201.0k, avg=64640.57, stdev=301962.63 00:27:13.439 clat (msec): min=79, max=10228, avg=6084.10, stdev=1884.94 00:27:13.439 lat (msec): min=2101, max=10233, avg=6148.74, stdev=1853.26 00:27:13.439 clat percentiles (msec): 00:27:13.439 | 1.00th=[ 2106], 5.00th=[ 4463], 10.00th=[ 4597], 20.00th=[ 5000], 00:27:13.439 | 30.00th=[ 5269], 40.00th=[ 5470], 50.00th=[ 5671], 60.00th=[ 5738], 00:27:13.439 | 70.00th=[ 5873], 80.00th=[ 6275], 90.00th=[10134], 95.00th=[10268], 00:27:13.439 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:27:13.439 | 99.99th=[10268] 00:27:13.439 bw ( KiB/s): min= 1444, max=38912, per=0.57%, avg=19305.00, stdev=18320.88, samples=4 00:27:13.439 iops : min= 1, max= 38, avg=18.75, stdev=18.03, samples=4 00:27:13.439 lat (msec) : 100=0.61%, >=2000=99.39% 00:27:13.439 cpu : usr=0.01%, sys=0.93%, ctx=597, majf=0, minf=32769 00:27:13.439 IO depths : 1=0.6%, 2=1.2%, 4=2.4%, 8=4.8%, 16=9.7%, 32=19.4%, >=64=61.8% 00:27:13.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.439 complete : 0=0.0%, 4=97.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.6% 00:27:13.439 issued rwts: total=165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.439 job4: (groupid=0, jobs=1): err= 0: pid=3423410: Sun Nov 17 16:03:57 2024 00:27:13.439 read: IOPS=81, BW=82.0MiB/s (86.0MB/s)(829MiB/10113msec) 00:27:13.439 slat (usec): min=43, max=2056.1k, avg=12102.76, stdev=74094.19 00:27:13.439 clat (msec): min=74, max=4141, avg=1427.70, stdev=1002.49 00:27:13.439 lat (msec): min=142, max=4142, avg=1439.80, stdev=1006.60 00:27:13.439 clat percentiles (msec): 00:27:13.439 | 1.00th=[ 157], 5.00th=[ 368], 10.00th=[ 667], 20.00th=[ 869], 00:27:13.439 | 30.00th=[ 936], 40.00th=[ 995], 50.00th=[ 1083], 60.00th=[ 1167], 00:27:13.439 | 70.00th=[ 1284], 80.00th=[ 1603], 90.00th=[ 3473], 95.00th=[ 3809], 00:27:13.439 | 99.00th=[ 4111], 99.50th=[ 4111], 99.90th=[ 4144], 99.95th=[ 4144], 00:27:13.439 | 99.99th=[ 4144] 00:27:13.439 bw ( KiB/s): min= 2048, max=157696, per=2.83%, avg=95695.40, stdev=48875.77, samples=15 00:27:13.439 iops : min= 2, max= 154, avg=93.40, stdev=47.72, samples=15 00:27:13.439 lat (msec) : 100=0.12%, 250=2.17%, 500=5.31%, 750=3.62%, 1000=28.95% 00:27:13.439 lat (msec) : 2000=44.51%, >=2000=15.32% 00:27:13.439 cpu : usr=0.07%, sys=1.52%, ctx=1131, majf=0, minf=32769 00:27:13.439 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 
8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:27:13.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.439 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:13.439 issued rwts: total=829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.439 job4: (groupid=0, jobs=1): err= 0: pid=3423411: Sun Nov 17 16:03:57 2024 00:27:13.439 read: IOPS=47, BW=47.9MiB/s (50.2MB/s)(514MiB/10731msec) 00:27:13.439 slat (usec): min=42, max=2125.7k, avg=20714.97, stdev=152194.18 00:27:13.439 clat (msec): min=80, max=8515, avg=2524.99, stdev=2453.75 00:27:13.439 lat (msec): min=572, max=8552, avg=2545.71, stdev=2463.99 00:27:13.439 clat percentiles (msec): 00:27:13.440 | 1.00th=[ 575], 5.00th=[ 575], 10.00th=[ 575], 20.00th=[ 575], 00:27:13.440 | 30.00th=[ 584], 40.00th=[ 667], 50.00th=[ 802], 60.00th=[ 2869], 00:27:13.440 | 70.00th=[ 3809], 80.00th=[ 4933], 90.00th=[ 6007], 95.00th=[ 8356], 00:27:13.440 | 99.00th=[ 8423], 99.50th=[ 8490], 99.90th=[ 8490], 99.95th=[ 8490], 00:27:13.440 | 99.99th=[ 8490] 00:27:13.440 bw ( KiB/s): min= 1484, max=227328, per=2.13%, avg=71987.55, stdev=81513.51, samples=11 00:27:13.440 iops : min= 1, max= 222, avg=70.00, stdev=79.81, samples=11 00:27:13.440 lat (msec) : 100=0.19%, 750=48.05%, 1000=8.75%, >=2000=43.00% 00:27:13.440 cpu : usr=0.02%, sys=1.28%, ctx=923, majf=0, minf=32769 00:27:13.440 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.2%, >=64=87.7% 00:27:13.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.440 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.440 issued rwts: total=514,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.440 job4: (groupid=0, jobs=1): err= 0: pid=3423412: Sun Nov 17 16:03:57 2024 00:27:13.440 read: IOPS=48, BW=48.0MiB/s (50.3MB/s)(508MiB/10583msec) 00:27:13.440 slat (usec): min=41, max=2051.2k, avg=20825.40, stdev=116074.14 00:27:13.440 clat (usec): min=982, max=4569.3k, avg=2384397.29, stdev=1213630.51 00:27:13.440 lat (msec): min=1062, max=4584, avg=2405.22, stdev=1214.09 00:27:13.440 clat percentiles (msec): 00:27:13.440 | 1.00th=[ 1070], 5.00th=[ 1099], 10.00th=[ 1099], 20.00th=[ 1234], 00:27:13.440 | 30.00th=[ 1586], 40.00th=[ 1737], 50.00th=[ 1838], 60.00th=[ 2056], 00:27:13.440 | 70.00th=[ 2937], 80.00th=[ 4111], 90.00th=[ 4329], 95.00th=[ 4530], 00:27:13.440 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 4597], 99.95th=[ 4597], 00:27:13.440 | 99.99th=[ 4597] 00:27:13.440 bw ( KiB/s): min= 2048, max=126976, per=2.10%, avg=70749.09, stdev=39832.03, samples=11 00:27:13.440 iops : min= 2, max= 124, avg=69.09, stdev=38.90, samples=11 00:27:13.440 lat (usec) : 1000=0.20% 00:27:13.440 lat (msec) : 2000=58.66%, >=2000=41.14% 00:27:13.440 cpu : usr=0.00%, sys=1.25%, ctx=1008, majf=0, minf=32769 00:27:13.440 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.1%, 32=6.3%, >=64=87.6% 00:27:13.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.440 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.440 issued rwts: total=508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.440 job4: (groupid=0, jobs=1): err= 0: pid=3423413: Sun Nov 17 16:03:57 2024 00:27:13.440 read: IOPS=35, BW=35.6MiB/s (37.4MB/s)(384MiB/10777msec) 00:27:13.440 slat (usec): min=31, 
max=2183.8k, avg=27848.50, stdev=156504.28 00:27:13.440 clat (msec): min=80, max=8375, avg=3397.80, stdev=2090.75 00:27:13.440 lat (msec): min=813, max=8455, avg=3425.65, stdev=2099.02 00:27:13.440 clat percentiles (msec): 00:27:13.440 | 1.00th=[ 835], 5.00th=[ 1116], 10.00th=[ 1116], 20.00th=[ 1133], 00:27:13.440 | 30.00th=[ 1200], 40.00th=[ 1703], 50.00th=[ 3842], 60.00th=[ 4279], 00:27:13.440 | 70.00th=[ 4597], 80.00th=[ 5201], 90.00th=[ 6275], 95.00th=[ 7013], 00:27:13.440 | 99.00th=[ 7617], 99.50th=[ 7617], 99.90th=[ 8356], 99.95th=[ 8356], 00:27:13.440 | 99.99th=[ 8356] 00:27:13.440 bw ( KiB/s): min= 1372, max=110371, per=1.30%, avg=43833.92, stdev=33762.31, samples=12 00:27:13.440 iops : min= 1, max= 107, avg=42.67, stdev=32.84, samples=12 00:27:13.440 lat (msec) : 100=0.26%, 1000=1.56%, 2000=40.36%, >=2000=57.81% 00:27:13.440 cpu : usr=0.02%, sys=1.48%, ctx=994, majf=0, minf=32769 00:27:13.440 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.1%, 16=4.2%, 32=8.3%, >=64=83.6% 00:27:13.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.440 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:27:13.440 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.440 job4: (groupid=0, jobs=1): err= 0: pid=3423414: Sun Nov 17 16:03:57 2024 00:27:13.440 read: IOPS=12, BW=12.8MiB/s (13.4MB/s)(136MiB/10664msec) 00:27:13.440 slat (usec): min=404, max=2148.4k, avg=77815.06, stdev=337833.82 00:27:13.440 clat (msec): min=79, max=8661, avg=4834.33, stdev=1479.83 00:27:13.440 lat (msec): min=1828, max=8715, avg=4912.15, stdev=1465.34 00:27:13.440 clat percentiles (msec): 00:27:13.440 | 1.00th=[ 1838], 5.00th=[ 1905], 10.00th=[ 1989], 20.00th=[ 4396], 00:27:13.440 | 30.00th=[ 4597], 40.00th=[ 4799], 50.00th=[ 5067], 60.00th=[ 5336], 00:27:13.440 | 70.00th=[ 5873], 80.00th=[ 6141], 90.00th=[ 6208], 95.00th=[ 6342], 00:27:13.440 | 99.00th=[ 6611], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:27:13.440 | 99.99th=[ 8658] 00:27:13.440 bw ( KiB/s): min= 2052, max=16384, per=0.27%, avg=9218.00, stdev=10134.25, samples=2 00:27:13.440 iops : min= 2, max= 16, avg= 9.00, stdev= 9.90, samples=2 00:27:13.440 lat (msec) : 100=0.74%, 2000=9.56%, >=2000=89.71% 00:27:13.440 cpu : usr=0.00%, sys=0.73%, ctx=470, majf=0, minf=32769 00:27:13.440 IO depths : 1=0.7%, 2=1.5%, 4=2.9%, 8=5.9%, 16=11.8%, 32=23.5%, >=64=53.7% 00:27:13.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.440 complete : 0=0.0%, 4=90.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=10.0% 00:27:13.440 issued rwts: total=136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.440 job4: (groupid=0, jobs=1): err= 0: pid=3423415: Sun Nov 17 16:03:57 2024 00:27:13.440 read: IOPS=25, BW=25.5MiB/s (26.7MB/s)(272MiB/10681msec) 00:27:13.440 slat (usec): min=124, max=2156.8k, avg=38959.33, stdev=227999.73 00:27:13.440 clat (msec): min=82, max=7975, avg=4025.83, stdev=2033.50 00:27:13.440 lat (msec): min=489, max=7979, avg=4064.79, stdev=2022.18 00:27:13.440 clat percentiles (msec): 00:27:13.440 | 1.00th=[ 489], 5.00th=[ 542], 10.00th=[ 2022], 20.00th=[ 2265], 00:27:13.440 | 30.00th=[ 2567], 40.00th=[ 2970], 50.00th=[ 3507], 60.00th=[ 4799], 00:27:13.440 | 70.00th=[ 5470], 80.00th=[ 5940], 90.00th=[ 6342], 95.00th=[ 7953], 00:27:13.440 | 99.00th=[ 7953], 99.50th=[ 7953], 99.90th=[ 7953], 99.95th=[ 7953], 00:27:13.440 | 99.99th=[ 
7953] 00:27:13.440 bw ( KiB/s): min= 2052, max=79872, per=1.10%, avg=37120.50, stdev=25960.92, samples=8 00:27:13.440 iops : min= 2, max= 78, avg=36.25, stdev=25.35, samples=8 00:27:13.440 lat (msec) : 100=0.37%, 500=0.74%, 750=6.25%, 2000=1.84%, >=2000=90.81% 00:27:13.440 cpu : usr=0.01%, sys=0.92%, ctx=643, majf=0, minf=32769 00:27:13.440 IO depths : 1=0.4%, 2=0.7%, 4=1.5%, 8=2.9%, 16=5.9%, 32=11.8%, >=64=76.8% 00:27:13.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.440 complete : 0=0.0%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.7% 00:27:13.440 issued rwts: total=272,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.440 job5: (groupid=0, jobs=1): err= 0: pid=3423424: Sun Nov 17 16:03:57 2024 00:27:13.440 read: IOPS=42, BW=42.2MiB/s (44.2MB/s)(423MiB/10026msec) 00:27:13.440 slat (usec): min=223, max=1943.6k, avg=23638.62, stdev=96792.76 00:27:13.440 clat (msec): min=23, max=3921, avg=2135.99, stdev=969.29 00:27:13.440 lat (msec): min=25, max=3951, avg=2159.63, stdev=972.23 00:27:13.440 clat percentiles (msec): 00:27:13.440 | 1.00th=[ 70], 5.00th=[ 718], 10.00th=[ 1011], 20.00th=[ 1284], 00:27:13.440 | 30.00th=[ 1418], 40.00th=[ 1720], 50.00th=[ 2022], 60.00th=[ 2366], 00:27:13.440 | 70.00th=[ 2937], 80.00th=[ 3037], 90.00th=[ 3540], 95.00th=[ 3708], 00:27:13.440 | 99.00th=[ 3910], 99.50th=[ 3910], 99.90th=[ 3910], 99.95th=[ 3910], 00:27:13.440 | 99.99th=[ 3910] 00:27:13.440 bw ( KiB/s): min=16384, max=108544, per=1.41%, avg=47607.75, stdev=24654.84, samples=12 00:27:13.440 iops : min= 16, max= 106, avg=46.42, stdev=24.07, samples=12 00:27:13.440 lat (msec) : 50=0.95%, 100=0.47%, 250=0.95%, 500=1.65%, 750=2.13% 00:27:13.440 lat (msec) : 1000=3.07%, 2000=40.66%, >=2000=50.12% 00:27:13.440 cpu : usr=0.02%, sys=1.10%, ctx=1723, majf=0, minf=32769 00:27:13.440 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.1% 00:27:13.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.440 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.440 issued rwts: total=423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.440 job5: (groupid=0, jobs=1): err= 0: pid=3423425: Sun Nov 17 16:03:57 2024 00:27:13.440 read: IOPS=53, BW=53.2MiB/s (55.8MB/s)(535MiB/10050msec) 00:27:13.440 slat (usec): min=463, max=141574, avg=18687.14, stdev=21859.50 00:27:13.440 clat (msec): min=49, max=3799, avg=1906.05, stdev=652.18 00:27:13.440 lat (msec): min=53, max=3802, avg=1924.74, stdev=654.85 00:27:13.440 clat percentiles (msec): 00:27:13.440 | 1.00th=[ 123], 5.00th=[ 584], 10.00th=[ 961], 20.00th=[ 1670], 00:27:13.440 | 30.00th=[ 1821], 40.00th=[ 1888], 50.00th=[ 1921], 60.00th=[ 2022], 00:27:13.440 | 70.00th=[ 2106], 80.00th=[ 2198], 90.00th=[ 2299], 95.00th=[ 3272], 00:27:13.440 | 99.00th=[ 3742], 99.50th=[ 3775], 99.90th=[ 3809], 99.95th=[ 3809], 00:27:13.440 | 99.99th=[ 3809] 00:27:13.440 bw ( KiB/s): min=30720, max=96256, per=1.90%, avg=64263.69, stdev=17237.24, samples=13 00:27:13.440 iops : min= 30, max= 94, avg=62.69, stdev=16.78, samples=13 00:27:13.440 lat (msec) : 50=0.19%, 100=0.37%, 250=1.31%, 500=2.06%, 750=3.36% 00:27:13.440 lat (msec) : 1000=3.36%, 2000=46.54%, >=2000=42.80% 00:27:13.440 cpu : usr=0.00%, sys=1.52%, ctx=2225, majf=0, minf=32769 00:27:13.440 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=6.0%, >=64=88.2% 00:27:13.440 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.440 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.440 issued rwts: total=535,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.440 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.440 job5: (groupid=0, jobs=1): err= 0: pid=3423426: Sun Nov 17 16:03:57 2024 00:27:13.440 read: IOPS=37, BW=37.1MiB/s (38.9MB/s)(372MiB/10031msec) 00:27:13.440 slat (usec): min=415, max=1943.6k, avg=26900.12, stdev=102789.17 00:27:13.440 clat (msec): min=20, max=3810, avg=2408.68, stdev=991.70 00:27:13.440 lat (msec): min=33, max=3866, avg=2435.58, stdev=995.06 00:27:13.440 clat percentiles (msec): 00:27:13.440 | 1.00th=[ 81], 5.00th=[ 542], 10.00th=[ 802], 20.00th=[ 1737], 00:27:13.440 | 30.00th=[ 1972], 40.00th=[ 2232], 50.00th=[ 2567], 60.00th=[ 2802], 00:27:13.440 | 70.00th=[ 3171], 80.00th=[ 3373], 90.00th=[ 3540], 95.00th=[ 3708], 00:27:13.440 | 99.00th=[ 3775], 99.50th=[ 3809], 99.90th=[ 3809], 99.95th=[ 3809], 00:27:13.441 | 99.99th=[ 3809] 00:27:13.441 bw ( KiB/s): min=18432, max=65536, per=1.15%, avg=38741.33, stdev=13199.34, samples=12 00:27:13.441 iops : min= 18, max= 64, avg=37.83, stdev=12.89, samples=12 00:27:13.441 lat (msec) : 50=0.54%, 100=1.34%, 250=1.08%, 500=1.88%, 750=3.76% 00:27:13.441 lat (msec) : 1000=4.57%, 2000=18.55%, >=2000=68.28% 00:27:13.441 cpu : usr=0.03%, sys=1.07%, ctx=1746, majf=0, minf=32769 00:27:13.441 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.2%, 16=4.3%, 32=8.6%, >=64=83.1% 00:27:13.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.441 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:27:13.441 issued rwts: total=372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.441 job5: (groupid=0, jobs=1): err= 0: pid=3423427: Sun Nov 17 16:03:57 2024 00:27:13.441 read: IOPS=60, BW=60.4MiB/s (63.3MB/s)(607MiB/10054msec) 00:27:13.441 slat (usec): min=41, max=2061.0k, avg=16472.10, stdev=109288.91 00:27:13.441 clat (msec): min=51, max=6932, avg=1884.08, stdev=2185.39 00:27:13.441 lat (msec): min=53, max=6934, avg=1900.55, stdev=2195.10 00:27:13.441 clat percentiles (msec): 00:27:13.441 | 1.00th=[ 61], 5.00th=[ 171], 10.00th=[ 279], 20.00th=[ 422], 00:27:13.441 | 30.00th=[ 426], 40.00th=[ 456], 50.00th=[ 493], 60.00th=[ 542], 00:27:13.441 | 70.00th=[ 2534], 80.00th=[ 4329], 90.00th=[ 5873], 95.00th=[ 6678], 00:27:13.441 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6946], 99.95th=[ 6946], 00:27:13.441 | 99.99th=[ 6946] 00:27:13.441 bw ( KiB/s): min= 4096, max=284672, per=2.65%, avg=89312.45, stdev=109932.51, samples=11 00:27:13.441 iops : min= 4, max= 278, avg=87.00, stdev=107.39, samples=11 00:27:13.441 lat (msec) : 100=2.31%, 250=5.27%, 500=44.65%, 750=10.38%, 2000=1.48% 00:27:13.441 lat (msec) : >=2000=35.91% 00:27:13.441 cpu : usr=0.01%, sys=1.55%, ctx=1163, majf=0, minf=32769 00:27:13.441 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.6% 00:27:13.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.441 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.441 issued rwts: total=607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.441 job5: (groupid=0, jobs=1): err= 0: pid=3423428: Sun Nov 17 16:03:57 2024 00:27:13.441 read: IOPS=45, BW=45.1MiB/s (47.3MB/s)(452MiB/10018msec) 00:27:13.441 
slat (usec): min=481, max=1943.6k, avg=22122.17, stdev=92237.75 00:27:13.441 clat (msec): min=15, max=3803, avg=1994.32, stdev=716.54 00:27:13.441 lat (msec): min=53, max=3821, avg=2016.45, stdev=721.16 00:27:13.441 clat percentiles (msec): 00:27:13.441 | 1.00th=[ 125], 5.00th=[ 584], 10.00th=[ 1003], 20.00th=[ 1636], 00:27:13.441 | 30.00th=[ 1670], 40.00th=[ 1754], 50.00th=[ 2005], 60.00th=[ 2140], 00:27:13.441 | 70.00th=[ 2500], 80.00th=[ 2735], 90.00th=[ 2802], 95.00th=[ 2869], 00:27:13.441 | 99.00th=[ 3775], 99.50th=[ 3775], 99.90th=[ 3809], 99.95th=[ 3809], 00:27:13.441 | 99.99th=[ 3809] 00:27:13.441 bw ( KiB/s): min=32768, max=71680, per=1.55%, avg=52380.25, stdev=12126.52, samples=12 00:27:13.441 iops : min= 32, max= 70, avg=51.00, stdev=11.98, samples=12 00:27:13.441 lat (msec) : 20=0.22%, 100=0.44%, 250=1.33%, 500=1.99%, 750=3.76% 00:27:13.441 lat (msec) : 1000=2.21%, 2000=39.60%, >=2000=50.44% 00:27:13.441 cpu : usr=0.02%, sys=1.24%, ctx=1968, majf=0, minf=32769 00:27:13.441 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.5%, 32=7.1%, >=64=86.1% 00:27:13.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.441 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.441 issued rwts: total=452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.441 job5: (groupid=0, jobs=1): err= 0: pid=3423429: Sun Nov 17 16:03:57 2024 00:27:13.441 read: IOPS=135, BW=136MiB/s (142MB/s)(1367MiB/10075msec) 00:27:13.441 slat (usec): min=39, max=1937.4k, avg=7308.24, stdev=54104.35 00:27:13.441 clat (msec): min=73, max=2589, avg=744.73, stdev=512.34 00:27:13.441 lat (msec): min=74, max=2592, avg=752.04, stdev=516.51 00:27:13.441 clat percentiles (msec): 00:27:13.441 | 1.00th=[ 264], 5.00th=[ 266], 10.00th=[ 268], 20.00th=[ 288], 00:27:13.441 | 30.00th=[ 418], 40.00th=[ 426], 50.00th=[ 625], 60.00th=[ 860], 00:27:13.441 | 70.00th=[ 919], 80.00th=[ 978], 90.00th=[ 1418], 95.00th=[ 2005], 00:27:13.441 | 99.00th=[ 2567], 99.50th=[ 2567], 99.90th=[ 2601], 99.95th=[ 2601], 00:27:13.441 | 99.99th=[ 2601] 00:27:13.441 bw ( KiB/s): min=24576, max=487424, per=5.01%, avg=169235.27, stdev=124859.09, samples=15 00:27:13.441 iops : min= 24, max= 476, avg=165.13, stdev=121.96, samples=15 00:27:13.441 lat (msec) : 100=0.15%, 250=0.22%, 500=46.96%, 750=4.83%, 1000=31.60% 00:27:13.441 lat (msec) : 2000=11.19%, >=2000=5.05% 00:27:13.441 cpu : usr=0.06%, sys=2.53%, ctx=1374, majf=0, minf=32769 00:27:13.441 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:27:13.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.441 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:13.441 issued rwts: total=1367,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.441 job5: (groupid=0, jobs=1): err= 0: pid=3423430: Sun Nov 17 16:03:57 2024 00:27:13.441 read: IOPS=65, BW=65.1MiB/s (68.2MB/s)(653MiB/10038msec) 00:27:13.441 slat (usec): min=39, max=938440, avg=15335.54, stdev=45763.64 00:27:13.441 clat (msec): min=20, max=4300, avg=1779.92, stdev=1177.17 00:27:13.441 lat (msec): min=45, max=4307, avg=1795.26, stdev=1182.41 00:27:13.441 clat percentiles (msec): 00:27:13.441 | 1.00th=[ 49], 5.00th=[ 300], 10.00th=[ 435], 20.00th=[ 550], 00:27:13.441 | 30.00th=[ 995], 40.00th=[ 1301], 50.00th=[ 1636], 60.00th=[ 1787], 00:27:13.441 | 70.00th=[ 2165], 80.00th=[ 3071], 
90.00th=[ 3675], 95.00th=[ 4077], 00:27:13.441 | 99.00th=[ 4212], 99.50th=[ 4245], 99.90th=[ 4329], 99.95th=[ 4329], 00:27:13.441 | 99.99th=[ 4329] 00:27:13.441 bw ( KiB/s): min= 4096, max=196608, per=1.93%, avg=65118.93, stdev=58082.13, samples=15 00:27:13.441 iops : min= 4, max= 192, avg=63.47, stdev=56.81, samples=15 00:27:13.441 lat (msec) : 50=1.23%, 100=1.38%, 250=1.99%, 500=12.10%, 750=6.43% 00:27:13.441 lat (msec) : 1000=7.96%, 2000=36.14%, >=2000=32.77% 00:27:13.441 cpu : usr=0.00%, sys=1.32%, ctx=1787, majf=0, minf=32769 00:27:13.441 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.4% 00:27:13.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.441 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.441 issued rwts: total=653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.441 job5: (groupid=0, jobs=1): err= 0: pid=3423431: Sun Nov 17 16:03:57 2024 00:27:13.441 read: IOPS=45, BW=45.7MiB/s (47.9MB/s)(459MiB/10038msec) 00:27:13.441 slat (usec): min=52, max=1827.8k, avg=21817.83, stdev=89890.29 00:27:13.441 clat (msec): min=20, max=3944, avg=2022.85, stdev=689.52 00:27:13.441 lat (msec): min=59, max=3950, avg=2044.66, stdev=693.31 00:27:13.441 clat percentiles (msec): 00:27:13.441 | 1.00th=[ 84], 5.00th=[ 617], 10.00th=[ 1217], 20.00th=[ 1569], 00:27:13.441 | 30.00th=[ 1905], 40.00th=[ 2005], 50.00th=[ 2089], 60.00th=[ 2198], 00:27:13.441 | 70.00th=[ 2333], 80.00th=[ 2433], 90.00th=[ 2534], 95.00th=[ 2769], 00:27:13.441 | 99.00th=[ 3943], 99.50th=[ 3943], 99.90th=[ 3943], 99.95th=[ 3943], 00:27:13.441 | 99.99th=[ 3943] 00:27:13.441 bw ( KiB/s): min=18432, max=106283, per=1.58%, avg=53223.42, stdev=23179.38, samples=12 00:27:13.441 iops : min= 18, max= 103, avg=51.83, stdev=22.52, samples=12 00:27:13.441 lat (msec) : 50=0.22%, 100=1.31%, 250=0.87%, 500=1.96%, 750=1.74% 00:27:13.441 lat (msec) : 1000=1.74%, 2000=31.15%, >=2000=61.00% 00:27:13.441 cpu : usr=0.03%, sys=1.16%, ctx=1640, majf=0, minf=32769 00:27:13.441 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.3% 00:27:13.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.441 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.441 issued rwts: total=459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.441 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.441 job5: (groupid=0, jobs=1): err= 0: pid=3423432: Sun Nov 17 16:03:57 2024 00:27:13.441 read: IOPS=70, BW=70.3MiB/s (73.7MB/s)(708MiB/10071msec) 00:27:13.441 slat (usec): min=53, max=2028.0k, avg=14123.64, stdev=78576.99 00:27:13.441 clat (msec): min=67, max=4944, avg=1722.94, stdev=1181.43 00:27:13.441 lat (msec): min=72, max=5668, avg=1737.06, stdev=1188.95 00:27:13.441 clat percentiles (msec): 00:27:13.441 | 1.00th=[ 155], 5.00th=[ 659], 10.00th=[ 835], 20.00th=[ 844], 00:27:13.441 | 30.00th=[ 911], 40.00th=[ 1099], 50.00th=[ 1183], 60.00th=[ 1318], 00:27:13.441 | 70.00th=[ 1905], 80.00th=[ 2735], 90.00th=[ 3641], 95.00th=[ 4245], 00:27:13.441 | 99.00th=[ 4732], 99.50th=[ 4866], 99.90th=[ 4933], 99.95th=[ 4933], 00:27:13.441 | 99.99th=[ 4933] 00:27:13.441 bw ( KiB/s): min=10219, max=157696, per=2.20%, avg=74349.50, stdev=43013.43, samples=16 00:27:13.441 iops : min= 9, max= 154, avg=72.44, stdev=42.13, samples=16 00:27:13.441 lat (msec) : 100=0.28%, 250=1.41%, 500=1.98%, 750=1.98%, 1000=30.23% 00:27:13.441 lat (msec) : 
2000=34.75%, >=2000=29.38% 00:27:13.441 cpu : usr=0.05%, sys=1.80%, ctx=1478, majf=0, minf=32769 00:27:13.441 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:27:13.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.442 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:27:13.442 issued rwts: total=708,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.442 job5: (groupid=0, jobs=1): err= 0: pid=3423433: Sun Nov 17 16:03:57 2024 00:27:13.442 read: IOPS=40, BW=40.1MiB/s (42.0MB/s)(404MiB/10075msec) 00:27:13.442 slat (usec): min=37, max=2071.8k, avg=24760.80, stdev=145255.25 00:27:13.442 clat (msec): min=69, max=5487, avg=2507.32, stdev=1605.72 00:27:13.442 lat (msec): min=136, max=5677, avg=2532.08, stdev=1608.52 00:27:13.442 clat percentiles (msec): 00:27:13.442 | 1.00th=[ 190], 5.00th=[ 489], 10.00th=[ 827], 20.00th=[ 953], 00:27:13.442 | 30.00th=[ 1167], 40.00th=[ 1334], 50.00th=[ 2232], 60.00th=[ 3205], 00:27:13.442 | 70.00th=[ 3373], 80.00th=[ 4077], 90.00th=[ 5134], 95.00th=[ 5336], 00:27:13.442 | 99.00th=[ 5470], 99.50th=[ 5470], 99.90th=[ 5470], 99.95th=[ 5470], 00:27:13.442 | 99.99th=[ 5470] 00:27:13.442 bw ( KiB/s): min= 8192, max=159744, per=1.53%, avg=51558.09, stdev=41933.02, samples=11 00:27:13.442 iops : min= 8, max= 156, avg=50.18, stdev=41.01, samples=11 00:27:13.442 lat (msec) : 100=0.25%, 250=2.23%, 500=2.72%, 750=2.97%, 1000=14.11% 00:27:13.442 lat (msec) : 2000=25.99%, >=2000=51.73% 00:27:13.442 cpu : usr=0.02%, sys=1.19%, ctx=1126, majf=0, minf=32769 00:27:13.442 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=7.9%, >=64=84.4% 00:27:13.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.442 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:27:13.442 issued rwts: total=404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.442 job5: (groupid=0, jobs=1): err= 0: pid=3423434: Sun Nov 17 16:03:57 2024 00:27:13.442 read: IOPS=136, BW=136MiB/s (143MB/s)(1372MiB/10052msec) 00:27:13.442 slat (usec): min=44, max=132403, avg=7291.74, stdev=14688.07 00:27:13.442 clat (msec): min=41, max=2273, avg=885.00, stdev=399.87 00:27:13.442 lat (msec): min=60, max=2275, avg=892.30, stdev=401.23 00:27:13.442 clat percentiles (msec): 00:27:13.442 | 1.00th=[ 292], 5.00th=[ 518], 10.00th=[ 531], 20.00th=[ 558], 00:27:13.442 | 30.00th=[ 592], 40.00th=[ 693], 50.00th=[ 768], 60.00th=[ 978], 00:27:13.442 | 70.00th=[ 1020], 80.00th=[ 1083], 90.00th=[ 1200], 95.00th=[ 1905], 00:27:13.442 | 99.00th=[ 2165], 99.50th=[ 2198], 99.90th=[ 2265], 99.95th=[ 2265], 00:27:13.442 | 99.99th=[ 2265] 00:27:13.442 bw ( KiB/s): min=26624, max=243225, per=4.36%, avg=147133.18, stdev=64378.05, samples=17 00:27:13.442 iops : min= 26, max= 237, avg=143.53, stdev=62.83, samples=17 00:27:13.442 lat (msec) : 50=0.07%, 100=0.22%, 250=0.51%, 500=0.80%, 750=47.89% 00:27:13.442 lat (msec) : 1000=18.37%, 2000=28.43%, >=2000=3.72% 00:27:13.442 cpu : usr=0.05%, sys=2.02%, ctx=1606, majf=0, minf=32769 00:27:13.442 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:27:13.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.442 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:13.442 issued rwts: total=1372,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.442 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:27:13.442 job5: (groupid=0, jobs=1): err= 0: pid=3423435: Sun Nov 17 16:03:57 2024 00:27:13.442 read: IOPS=39, BW=39.0MiB/s (40.9MB/s)(396MiB/10144msec) 00:27:13.442 slat (usec): min=31, max=1946.1k, avg=25426.51, stdev=100175.61 00:27:13.442 clat (msec): min=72, max=5608, avg=2608.70, stdev=1016.44 00:27:13.442 lat (msec): min=165, max=5610, avg=2634.13, stdev=1022.07 00:27:13.442 clat percentiles (msec): 00:27:13.442 | 1.00th=[ 188], 5.00th=[ 885], 10.00th=[ 1301], 20.00th=[ 1838], 00:27:13.442 | 30.00th=[ 1989], 40.00th=[ 2165], 50.00th=[ 2433], 60.00th=[ 2836], 00:27:13.442 | 70.00th=[ 3675], 80.00th=[ 3742], 90.00th=[ 3775], 95.00th=[ 3876], 00:27:13.442 | 99.00th=[ 4077], 99.50th=[ 4077], 99.90th=[ 5604], 99.95th=[ 5604], 00:27:13.442 | 99.99th=[ 5604] 00:27:13.442 bw ( KiB/s): min= 4096, max=79872, per=1.35%, avg=45738.67, stdev=28139.54, samples=12 00:27:13.442 iops : min= 4, max= 78, avg=44.67, stdev=27.48, samples=12 00:27:13.442 lat (msec) : 100=0.25%, 250=1.01%, 500=1.52%, 750=1.26%, 1000=1.77% 00:27:13.442 lat (msec) : 2000=24.24%, >=2000=69.95% 00:27:13.442 cpu : usr=0.04%, sys=1.28%, ctx=1817, majf=0, minf=32769 00:27:13.442 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.1%, >=64=84.1% 00:27:13.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.442 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:27:13.442 issued rwts: total=396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.442 job5: (groupid=0, jobs=1): err= 0: pid=3423436: Sun Nov 17 16:03:57 2024 00:27:13.442 read: IOPS=47, BW=47.7MiB/s (50.0MB/s)(482MiB/10112msec) 00:27:13.442 slat (usec): min=490, max=619140, avg=20829.13, stdev=36196.61 00:27:13.442 clat (msec): min=69, max=4020, avg=2385.76, stdev=831.59 00:27:13.442 lat (msec): min=134, max=4048, avg=2406.59, stdev=832.25 00:27:13.442 clat percentiles (msec): 00:27:13.442 | 1.00th=[ 144], 5.00th=[ 477], 10.00th=[ 776], 20.00th=[ 1921], 00:27:13.442 | 30.00th=[ 2400], 40.00th=[ 2500], 50.00th=[ 2635], 60.00th=[ 2836], 00:27:13.442 | 70.00th=[ 2937], 80.00th=[ 3004], 90.00th=[ 3071], 95.00th=[ 3171], 00:27:13.442 | 99.00th=[ 3272], 99.50th=[ 4010], 99.90th=[ 4010], 99.95th=[ 4010], 00:27:13.442 | 99.99th=[ 4010] 00:27:13.442 bw ( KiB/s): min=26624, max=79872, per=1.43%, avg=48332.80, stdev=15803.19, samples=15 00:27:13.442 iops : min= 26, max= 78, avg=47.20, stdev=15.43, samples=15 00:27:13.442 lat (msec) : 100=0.21%, 250=2.07%, 500=3.73%, 750=3.73%, 1000=2.28% 00:27:13.442 lat (msec) : 2000=9.54%, >=2000=78.42% 00:27:13.442 cpu : usr=0.02%, sys=1.34%, ctx=2023, majf=0, minf=32769 00:27:13.442 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.3%, 32=6.6%, >=64=86.9% 00:27:13.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:13.442 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:27:13.442 issued rwts: total=482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:13.442 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:13.442 00:27:13.442 Run status group 0 (all jobs): 00:27:13.442 READ: bw=3297MiB/s (3457MB/s), 1918KiB/s-244MiB/s (1964kB/s-256MB/s), io=34.8GiB (37.4GB), run=10018-10813msec 00:27:13.442 00:27:13.442 Disk stats (read/write): 00:27:13.442 nvme0n1: ios=45737/0, merge=0/0, ticks=7481268/0, in_queue=7481268, util=98.17% 00:27:13.442 nvme1n1: ios=52683/0, merge=0/0, ticks=6640733/0, in_queue=6640733, 
util=98.40% 00:27:13.442 nvme2n1: ios=36695/0, merge=0/0, ticks=7260576/0, in_queue=7260576, util=98.66% 00:27:13.442 nvme3n1: ios=38779/0, merge=0/0, ticks=7122077/0, in_queue=7122077, util=98.70% 00:27:13.442 nvme4n1: ios=45074/0, merge=0/0, ticks=6078426/0, in_queue=6078426, util=99.06% 00:27:13.442 nvme5n1: ios=64735/0, merge=0/0, ticks=6404838/0, in_queue=6404838, util=98.73% 00:27:13.442 16:03:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:27:13.442 16:03:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:27:13.442 16:03:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:13.442 16:03:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:27:13.699 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000000 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000000 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:13.700 16:03:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:15.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:15.069 16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:27:15.069 16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:27:15.069 16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:15.069 16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000001 00:27:15.069 16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:15.069 16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000001 00:27:15.069 
16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:27:15.069 16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:15.069 16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.069 16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:15.069 16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.069 16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:15.069 16:04:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:16.000 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000002 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000002 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:16.000 16:04:01 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:16.932 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000003 00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000003 
00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:16.932 16:04:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:17.865 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000004 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w SPDK00000000000004 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:27:17.865 16:04:03 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:18.796 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:18.796 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:27:18.796 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1223 -- # local i=0 00:27:18.796 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:18.797 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1224 -- # grep -q -w SPDK00000000000005 00:27:18.797 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDK00000000000005 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # return 0 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:19.054 rmmod nvme_rdma 00:27:19.054 rmmod nvme_fabrics 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@129 -- # return 0 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@517 -- # '[' -n 3421764 ']' 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # killprocess 3421764 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # '[' -z 3421764 ']' 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@958 -- # kill -0 3421764 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # uname 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3421764 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3421764' 00:27:19.054 killing process with pid 3421764 00:27:19.054 16:04:04 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@973 -- # kill 3421764 00:27:19.054 16:04:04 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@978 -- # wait 3421764 00:27:21.581 16:04:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:21.581 16:04:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:21.581 00:27:21.581 real 0m35.384s 00:27:21.581 user 2m1.086s 00:27:21.581 sys 0m18.081s 00:27:21.581 16:04:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:21.581 16:04:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:27:21.581 ************************************ 00:27:21.581 END TEST nvmf_srq_overwhelm 00:27:21.581 ************************************ 00:27:21.581 16:04:06 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:27:21.581 16:04:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:21.581 16:04:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:21.581 16:04:06 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:21.581 ************************************ 00:27:21.581 START TEST nvmf_shutdown 00:27:21.581 ************************************ 00:27:21.581 16:04:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:27:21.582 * Looking for test storage... 00:27:21.582 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:27:21.582 16:04:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:21.582 16:04:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:21.582 16:04:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 
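Right after the nvmf_shutdown suite starts, the trace steps into the lcov version check from scripts/common.sh: lcov --version is parsed, then a dotted-version comparison (lt 1.15 2 via cmp_versions, continued in the entries below) splits both version strings on '.', '-' and ':' and compares them component by component to pick the option spelling for coverage collection. A simplified stand-alone sketch of that comparison idea; the real helper supports several operators, this one only implements less-than:

# version_lt A B  -> exit 0 if A < B when compared component-wise
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    done
    return 1    # equal, so not less-than
}
version_lt 1.15 2 && echo "lcov 1.x detected, keep the lcov_* option names"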
00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:21.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.582 --rc genhtml_branch_coverage=1 00:27:21.582 --rc genhtml_function_coverage=1 00:27:21.582 --rc genhtml_legend=1 00:27:21.582 --rc geninfo_all_blocks=1 00:27:21.582 --rc geninfo_unexecuted_blocks=1 00:27:21.582 00:27:21.582 ' 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:21.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.582 --rc genhtml_branch_coverage=1 00:27:21.582 --rc genhtml_function_coverage=1 00:27:21.582 --rc genhtml_legend=1 00:27:21.582 --rc geninfo_all_blocks=1 00:27:21.582 --rc geninfo_unexecuted_blocks=1 00:27:21.582 00:27:21.582 ' 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:21.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.582 --rc genhtml_branch_coverage=1 00:27:21.582 --rc genhtml_function_coverage=1 00:27:21.582 --rc genhtml_legend=1 00:27:21.582 --rc geninfo_all_blocks=1 00:27:21.582 --rc geninfo_unexecuted_blocks=1 00:27:21.582 00:27:21.582 ' 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:21.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:21.582 --rc genhtml_branch_coverage=1 00:27:21.582 --rc genhtml_function_coverage=1 00:27:21.582 --rc genhtml_legend=1 00:27:21.582 --rc geninfo_all_blocks=1 
00:27:21.582 --rc geninfo_unexecuted_blocks=1 00:27:21.582 00:27:21.582 ' 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:21.582 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:21.583 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:21.583 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:21.583 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:21.583 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:21.583 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:21.583 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:21.583 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:21.583 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:21.583 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:21.583 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:21.583 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:21.583 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:21.840 ************************************ 00:27:21.840 START TEST nvmf_shutdown_tc1 00:27:21.840 ************************************ 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:21.840 16:04:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.399 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.399 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 
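The "integer expression expected" message just above comes from nvmf/common.sh line 33 running '[' '' -eq 1 ']': the [ builtin needs an integer on both sides of -eq, so it prints the complaint, the test evaluates false, and the script carries on. A defensive way to write that kind of check when the variable may be empty or unset (an illustrative pattern only, not a claim about how common.sh is written):

flag=""                               # e.g. an optional knob that was never exported
# [ "$flag" -eq 1 ]                   # -> "[: : integer expression expected", as seen in this log
if [ "${flag:-0}" -eq 1 ]; then       # default empty/unset to 0 so the test is always well-formed
    echo "flag enabled"
else
    echo "flag disabled"
fi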
00:27:28.399 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:28.399 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:28.399 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:28.399 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:28.399 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:28.400 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:28.400 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:28.400 16:04:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:28.400 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:28.400 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # rdma_device_init 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 
-- # modprobe ib_uverbs 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:28.400 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:28.401 
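The allocate_nic_ips / get_rdma_if_list / get_ip_address helpers traced around this point enumerate the netdevs behind the two mlx5 ports (mlx_0_0 and mlx_0_1), then read each interface's IPv4 address by filtering ip -o -4 addr show through awk and cut; those addresses (192.168.100.8 and 192.168.100.9 in this run) become the RDMA target IPs used later. The address lookup, condensed into a stand-alone sketch with the interface names taken from this run:

# Print "<ifname> <ipv4>" for the RDMA-backed interfaces seen in this run.
for ifname in mlx_0_0 mlx_0_1; do
    ip_addr=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1)
    if [ -z "$ip_addr" ]; then
        echo "$ifname has no IPv4 address assigned" >&2
        continue
    fi
    echo "$ifname $ip_addr"    # e.g. "mlx_0_0 192.168.100.8", "mlx_0_1 192.168.100.9"
done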
16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:28.401 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:28.401 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:28.401 altname enp217s0f0np0 00:27:28.401 altname ens818f0np0 00:27:28.401 inet 192.168.100.8/24 scope global mlx_0_0 00:27:28.401 valid_lft forever preferred_lft forever 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:28.401 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:28.401 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:28.401 altname enp217s0f1np1 00:27:28.401 altname ens818f1np1 00:27:28.401 inet 192.168.100.9/24 scope global mlx_0_1 00:27:28.401 valid_lft forever preferred_lft forever 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:28.401 16:04:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:28.401 16:04:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:28.401 192.168.100.9' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # head -n 1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:28.401 192.168.100.9' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:28.401 192.168.100.9' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # tail -n +2 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # head -n 1 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3429796 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3429796 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3429796 ']' 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:28.401 16:04:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:28.401 [2024-11-17 16:04:13.818232] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:28.401 [2024-11-17 16:04:13.818334] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.660 [2024-11-17 16:04:13.955907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:28.660 [2024-11-17 16:04:14.055827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.660 [2024-11-17 16:04:14.055877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.660 [2024-11-17 16:04:14.055889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.660 [2024-11-17 16:04:14.055902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.660 [2024-11-17 16:04:14.055911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:28.660 [2024-11-17 16:04:14.058407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.660 [2024-11-17 16:04:14.058474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.660 [2024-11-17 16:04:14.058559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.660 [2024-11-17 16:04:14.058606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:29.228 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.228 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:27:29.228 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:29.228 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:29.228 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.228 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.228 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:29.228 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.228 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.228 [2024-11-17 16:04:14.693880] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7fd46db53940) succeed. 
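At this point the target side is up: nvmf_tgt was started with shm id 0, tracepoint mask 0xFFFF and reactor mask 0x1E (cores 1-4), waitforlisten blocked until the RPC socket answered, the two mlx5 IB devices were registered, and an RDMA transport was created with 1024 shared buffers and an 8 KiB IO unit before the per-subsystem setup that follows. The same bring-up written as explicit rpc.py calls; the readiness poll and the single-subsystem example at the end are assumptions sketched from this log (64 MiB Malloc bdevs with 512 B blocks, listener on 192.168.100.8:4420), not a verbatim copy of shutdown.sh:

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
# wait for the RPC socket before issuing any commands
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# per-subsystem setup, which the test repeats for cnode1..cnode10 with Malloc1..Malloc10
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420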
00:27:29.228 [2024-11-17 16:04:14.703370] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7fd46db0d940) succeed. 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:29.487 16:04:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:29.487 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:29.487 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:29.487 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:29.487 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:29.487 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:29.487 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:29.487 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:29.487 16:04:15 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:29.487 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:29.487 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.487 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:29.746 Malloc1 00:27:29.746 [2024-11-17 16:04:15.109922] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:29.746 Malloc2 00:27:29.746 Malloc3 00:27:30.004 Malloc4 00:27:30.004 Malloc5 00:27:30.263 Malloc6 00:27:30.263 Malloc7 00:27:30.263 Malloc8 00:27:30.522 Malloc9 00:27:30.522 Malloc10 00:27:30.522 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.522 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:30.522 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:30.522 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.522 16:04:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3430229 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3430229 /var/tmp/bdevperf.sock 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3430229 ']' 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:30.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
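Note: the rpcs.txt assembled by the cat loop above is consumed by rpc_cmd but never echoed to this log. The Malloc1-Malloc10 bdevs, the cnode1-cnode10 subsystems referenced in the JSON further down, and the RDMA listener on 192.168.100.8:4420 imply that each generated block looks roughly like the sketch below; the malloc size, block size and serial number are illustrative assumptions and are not visible in this log:

  # hypothetical block for subsystem $i (i = 1..10), matching the objects created above
  bdev_malloc_create -b Malloc$i 64 512
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420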
00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.522 { 00:27:30.522 "params": { 00:27:30.522 "name": "Nvme$subsystem", 00:27:30.522 "trtype": "$TEST_TRANSPORT", 00:27:30.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.522 "adrfam": "ipv4", 00:27:30.522 "trsvcid": "$NVMF_PORT", 00:27:30.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.522 "hdgst": ${hdgst:-false}, 00:27:30.522 "ddgst": ${ddgst:-false} 00:27:30.522 }, 00:27:30.522 "method": "bdev_nvme_attach_controller" 00:27:30.522 } 00:27:30.522 EOF 00:27:30.522 )") 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.522 { 00:27:30.522 "params": { 00:27:30.522 "name": "Nvme$subsystem", 00:27:30.522 "trtype": "$TEST_TRANSPORT", 00:27:30.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.522 "adrfam": "ipv4", 00:27:30.522 "trsvcid": "$NVMF_PORT", 00:27:30.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.522 "hdgst": ${hdgst:-false}, 00:27:30.522 "ddgst": ${ddgst:-false} 00:27:30.522 }, 00:27:30.522 "method": "bdev_nvme_attach_controller" 00:27:30.522 } 00:27:30.522 EOF 00:27:30.522 )") 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.522 { 00:27:30.522 "params": { 00:27:30.522 "name": "Nvme$subsystem", 00:27:30.522 "trtype": "$TEST_TRANSPORT", 00:27:30.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.522 "adrfam": "ipv4", 00:27:30.522 "trsvcid": "$NVMF_PORT", 00:27:30.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.522 "hdgst": ${hdgst:-false}, 00:27:30.522 "ddgst": ${ddgst:-false} 00:27:30.522 }, 00:27:30.522 "method": "bdev_nvme_attach_controller" 00:27:30.522 } 00:27:30.522 EOF 00:27:30.522 )") 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.522 { 00:27:30.522 "params": { 00:27:30.522 "name": "Nvme$subsystem", 00:27:30.522 "trtype": "$TEST_TRANSPORT", 00:27:30.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.522 "adrfam": "ipv4", 00:27:30.522 "trsvcid": "$NVMF_PORT", 00:27:30.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.522 "hdgst": ${hdgst:-false}, 00:27:30.522 "ddgst": ${ddgst:-false} 00:27:30.522 }, 00:27:30.522 "method": "bdev_nvme_attach_controller" 00:27:30.522 } 00:27:30.522 EOF 00:27:30.522 )") 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.522 { 00:27:30.522 "params": { 00:27:30.522 "name": "Nvme$subsystem", 00:27:30.522 "trtype": "$TEST_TRANSPORT", 00:27:30.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.522 "adrfam": "ipv4", 00:27:30.522 "trsvcid": "$NVMF_PORT", 00:27:30.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.522 "hdgst": ${hdgst:-false}, 00:27:30.522 "ddgst": ${ddgst:-false} 00:27:30.522 }, 00:27:30.522 "method": "bdev_nvme_attach_controller" 00:27:30.522 } 00:27:30.522 EOF 00:27:30.522 )") 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.522 { 00:27:30.522 "params": { 00:27:30.522 "name": "Nvme$subsystem", 00:27:30.522 "trtype": "$TEST_TRANSPORT", 00:27:30.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.522 "adrfam": "ipv4", 00:27:30.522 "trsvcid": "$NVMF_PORT", 00:27:30.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.522 "hdgst": ${hdgst:-false}, 00:27:30.522 "ddgst": ${ddgst:-false} 00:27:30.522 }, 00:27:30.522 "method": "bdev_nvme_attach_controller" 00:27:30.522 } 00:27:30.522 EOF 00:27:30.522 )") 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.522 { 00:27:30.522 "params": { 00:27:30.522 "name": "Nvme$subsystem", 00:27:30.522 "trtype": "$TEST_TRANSPORT", 00:27:30.522 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.522 "adrfam": "ipv4", 00:27:30.522 "trsvcid": "$NVMF_PORT", 00:27:30.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.522 "hdgst": ${hdgst:-false}, 00:27:30.522 "ddgst": ${ddgst:-false} 00:27:30.522 }, 00:27:30.522 "method": "bdev_nvme_attach_controller" 00:27:30.522 } 00:27:30.522 EOF 00:27:30.522 )") 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:30.522 16:04:16 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.522 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.523 { 00:27:30.523 "params": { 00:27:30.523 "name": "Nvme$subsystem", 00:27:30.523 "trtype": "$TEST_TRANSPORT", 00:27:30.523 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.523 "adrfam": "ipv4", 00:27:30.523 "trsvcid": "$NVMF_PORT", 00:27:30.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.523 "hdgst": ${hdgst:-false}, 00:27:30.523 "ddgst": ${ddgst:-false} 00:27:30.523 }, 00:27:30.523 "method": "bdev_nvme_attach_controller" 00:27:30.523 } 00:27:30.523 EOF 00:27:30.523 )") 00:27:30.523 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:30.781 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.781 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.781 { 00:27:30.781 "params": { 00:27:30.781 "name": "Nvme$subsystem", 00:27:30.781 "trtype": "$TEST_TRANSPORT", 00:27:30.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.781 "adrfam": "ipv4", 00:27:30.781 "trsvcid": "$NVMF_PORT", 00:27:30.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.781 "hdgst": ${hdgst:-false}, 00:27:30.781 "ddgst": ${ddgst:-false} 00:27:30.781 }, 00:27:30.781 "method": "bdev_nvme_attach_controller" 00:27:30.781 } 00:27:30.782 EOF 00:27:30.782 )") 00:27:30.782 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:30.782 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:30.782 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:30.782 { 00:27:30.782 "params": { 00:27:30.782 "name": "Nvme$subsystem", 00:27:30.782 "trtype": "$TEST_TRANSPORT", 00:27:30.782 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.782 "adrfam": "ipv4", 00:27:30.782 "trsvcid": "$NVMF_PORT", 00:27:30.782 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.782 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.782 "hdgst": ${hdgst:-false}, 00:27:30.782 "ddgst": ${ddgst:-false} 00:27:30.782 }, 00:27:30.782 "method": "bdev_nvme_attach_controller" 00:27:30.782 } 00:27:30.782 EOF 00:27:30.782 )") 00:27:30.782 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:30.782 [2024-11-17 16:04:16.081012] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:30.782 [2024-11-17 16:04:16.081105] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:30.782 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
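Note: the heredoc loop traced above is how gen_nvmf_target_json assembles the configuration that bdev_svc reads from /dev/fd/63: one bdev_nvme_attach_controller fragment per subsystem argument, comma-joined and wrapped into a bdev subsystem config that jq validates. A condensed sketch of the pattern (the full function in nvmf/common.sh fills in the transport, address and digest fields shown in the printed output below):

  # condensed sketch of the gen_nvmf_target_json pattern seen in this trace
  config=()
  for subsystem in "${@:-1}"; do
      config+=("{ \"params\": { \"name\": \"Nvme$subsystem\" }, \"method\": \"bdev_nvme_attach_controller\" }")
  done
  # comma-join the fragments, wrap them in a bdev-subsystem config, and let jq validate the result
  jq . <<< "{ \"subsystems\": [ { \"subsystem\": \"bdev\", \"config\": [ $(IFS=','; printf '%s' "${config[*]}") ] } ] }"

In this run the $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT placeholders expand to rdma, 192.168.100.8 and 4420, which is what the printed controller list that follows contains.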
00:27:30.782 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:30.782 16:04:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:30.782 "params": { 00:27:30.782 "name": "Nvme1", 00:27:30.782 "trtype": "rdma", 00:27:30.782 "traddr": "192.168.100.8", 00:27:30.782 "adrfam": "ipv4", 00:27:30.782 "trsvcid": "4420", 00:27:30.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:30.782 "hdgst": false, 00:27:30.782 "ddgst": false 00:27:30.782 }, 00:27:30.782 "method": "bdev_nvme_attach_controller" 00:27:30.782 },{ 00:27:30.782 "params": { 00:27:30.782 "name": "Nvme2", 00:27:30.782 "trtype": "rdma", 00:27:30.782 "traddr": "192.168.100.8", 00:27:30.782 "adrfam": "ipv4", 00:27:30.782 "trsvcid": "4420", 00:27:30.782 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:30.782 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:30.782 "hdgst": false, 00:27:30.782 "ddgst": false 00:27:30.782 }, 00:27:30.782 "method": "bdev_nvme_attach_controller" 00:27:30.782 },{ 00:27:30.782 "params": { 00:27:30.782 "name": "Nvme3", 00:27:30.782 "trtype": "rdma", 00:27:30.782 "traddr": "192.168.100.8", 00:27:30.782 "adrfam": "ipv4", 00:27:30.782 "trsvcid": "4420", 00:27:30.782 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:30.782 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:30.782 "hdgst": false, 00:27:30.782 "ddgst": false 00:27:30.782 }, 00:27:30.782 "method": "bdev_nvme_attach_controller" 00:27:30.782 },{ 00:27:30.782 "params": { 00:27:30.782 "name": "Nvme4", 00:27:30.782 "trtype": "rdma", 00:27:30.782 "traddr": "192.168.100.8", 00:27:30.782 "adrfam": "ipv4", 00:27:30.782 "trsvcid": "4420", 00:27:30.782 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:30.782 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:30.782 "hdgst": false, 00:27:30.782 "ddgst": false 00:27:30.782 }, 00:27:30.782 "method": "bdev_nvme_attach_controller" 00:27:30.782 },{ 00:27:30.782 "params": { 00:27:30.782 "name": "Nvme5", 00:27:30.782 "trtype": "rdma", 00:27:30.782 "traddr": "192.168.100.8", 00:27:30.782 "adrfam": "ipv4", 00:27:30.782 "trsvcid": "4420", 00:27:30.782 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:30.782 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:30.782 "hdgst": false, 00:27:30.782 "ddgst": false 00:27:30.782 }, 00:27:30.782 "method": "bdev_nvme_attach_controller" 00:27:30.782 },{ 00:27:30.782 "params": { 00:27:30.782 "name": "Nvme6", 00:27:30.782 "trtype": "rdma", 00:27:30.782 "traddr": "192.168.100.8", 00:27:30.782 "adrfam": "ipv4", 00:27:30.782 "trsvcid": "4420", 00:27:30.782 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:30.782 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:30.782 "hdgst": false, 00:27:30.782 "ddgst": false 00:27:30.782 }, 00:27:30.782 "method": "bdev_nvme_attach_controller" 00:27:30.782 },{ 00:27:30.782 "params": { 00:27:30.782 "name": "Nvme7", 00:27:30.782 "trtype": "rdma", 00:27:30.782 "traddr": "192.168.100.8", 00:27:30.782 "adrfam": "ipv4", 00:27:30.782 "trsvcid": "4420", 00:27:30.782 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:30.782 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:30.782 "hdgst": false, 00:27:30.782 "ddgst": false 00:27:30.782 }, 00:27:30.782 "method": "bdev_nvme_attach_controller" 00:27:30.782 },{ 00:27:30.782 "params": { 00:27:30.782 "name": "Nvme8", 00:27:30.782 "trtype": "rdma", 00:27:30.782 "traddr": "192.168.100.8", 00:27:30.782 "adrfam": "ipv4", 00:27:30.782 "trsvcid": "4420", 00:27:30.782 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:27:30.782 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:30.782 "hdgst": false, 00:27:30.782 "ddgst": false 00:27:30.782 }, 00:27:30.782 "method": "bdev_nvme_attach_controller" 00:27:30.782 },{ 00:27:30.782 "params": { 00:27:30.782 "name": "Nvme9", 00:27:30.782 "trtype": "rdma", 00:27:30.782 "traddr": "192.168.100.8", 00:27:30.782 "adrfam": "ipv4", 00:27:30.782 "trsvcid": "4420", 00:27:30.782 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:30.782 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:30.782 "hdgst": false, 00:27:30.782 "ddgst": false 00:27:30.782 }, 00:27:30.782 "method": "bdev_nvme_attach_controller" 00:27:30.782 },{ 00:27:30.782 "params": { 00:27:30.782 "name": "Nvme10", 00:27:30.782 "trtype": "rdma", 00:27:30.782 "traddr": "192.168.100.8", 00:27:30.782 "adrfam": "ipv4", 00:27:30.782 "trsvcid": "4420", 00:27:30.782 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:30.782 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:30.782 "hdgst": false, 00:27:30.782 "ddgst": false 00:27:30.782 }, 00:27:30.782 "method": "bdev_nvme_attach_controller" 00:27:30.782 }' 00:27:30.782 [2024-11-17 16:04:16.216837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.782 [2024-11-17 16:04:16.320004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:32.158 16:04:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.158 16:04:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:27:32.158 16:04:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:32.158 16:04:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.158 16:04:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:32.158 16:04:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.158 16:04:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3430229 00:27:32.158 16:04:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:27:32.158 16:04:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:27:33.093 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3430229 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3429796 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:27:33.093 16:04:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.093 { 00:27:33.093 "params": { 00:27:33.093 "name": "Nvme$subsystem", 00:27:33.093 "trtype": "$TEST_TRANSPORT", 00:27:33.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.093 "adrfam": "ipv4", 00:27:33.093 "trsvcid": "$NVMF_PORT", 00:27:33.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.093 "hdgst": ${hdgst:-false}, 00:27:33.093 "ddgst": ${ddgst:-false} 00:27:33.093 }, 00:27:33.093 "method": "bdev_nvme_attach_controller" 00:27:33.093 } 00:27:33.093 EOF 00:27:33.093 )") 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.093 { 00:27:33.093 "params": { 00:27:33.093 "name": "Nvme$subsystem", 00:27:33.093 "trtype": "$TEST_TRANSPORT", 00:27:33.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.093 "adrfam": "ipv4", 00:27:33.093 "trsvcid": "$NVMF_PORT", 00:27:33.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.093 "hdgst": ${hdgst:-false}, 00:27:33.093 "ddgst": ${ddgst:-false} 00:27:33.093 }, 00:27:33.093 "method": "bdev_nvme_attach_controller" 00:27:33.093 } 00:27:33.093 EOF 00:27:33.093 )") 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.093 { 00:27:33.093 "params": { 00:27:33.093 "name": "Nvme$subsystem", 00:27:33.093 "trtype": "$TEST_TRANSPORT", 00:27:33.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.093 "adrfam": "ipv4", 00:27:33.093 "trsvcid": "$NVMF_PORT", 00:27:33.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.093 "hdgst": ${hdgst:-false}, 00:27:33.093 "ddgst": ${ddgst:-false} 00:27:33.093 }, 00:27:33.093 "method": "bdev_nvme_attach_controller" 00:27:33.093 } 00:27:33.093 EOF 00:27:33.093 )") 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.093 { 00:27:33.093 "params": { 00:27:33.093 "name": "Nvme$subsystem", 00:27:33.093 "trtype": "$TEST_TRANSPORT", 00:27:33.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.093 "adrfam": "ipv4", 00:27:33.093 "trsvcid": "$NVMF_PORT", 00:27:33.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.093 "hdgst": ${hdgst:-false}, 00:27:33.093 "ddgst": ${ddgst:-false} 00:27:33.093 }, 00:27:33.093 "method": 
"bdev_nvme_attach_controller" 00:27:33.093 } 00:27:33.093 EOF 00:27:33.093 )") 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.093 { 00:27:33.093 "params": { 00:27:33.093 "name": "Nvme$subsystem", 00:27:33.093 "trtype": "$TEST_TRANSPORT", 00:27:33.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.093 "adrfam": "ipv4", 00:27:33.093 "trsvcid": "$NVMF_PORT", 00:27:33.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.093 "hdgst": ${hdgst:-false}, 00:27:33.093 "ddgst": ${ddgst:-false} 00:27:33.093 }, 00:27:33.093 "method": "bdev_nvme_attach_controller" 00:27:33.093 } 00:27:33.093 EOF 00:27:33.093 )") 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.093 { 00:27:33.093 "params": { 00:27:33.093 "name": "Nvme$subsystem", 00:27:33.093 "trtype": "$TEST_TRANSPORT", 00:27:33.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.093 "adrfam": "ipv4", 00:27:33.093 "trsvcid": "$NVMF_PORT", 00:27:33.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.093 "hdgst": ${hdgst:-false}, 00:27:33.093 "ddgst": ${ddgst:-false} 00:27:33.093 }, 00:27:33.093 "method": "bdev_nvme_attach_controller" 00:27:33.093 } 00:27:33.093 EOF 00:27:33.093 )") 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.093 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.093 { 00:27:33.093 "params": { 00:27:33.093 "name": "Nvme$subsystem", 00:27:33.093 "trtype": "$TEST_TRANSPORT", 00:27:33.093 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.093 "adrfam": "ipv4", 00:27:33.093 "trsvcid": "$NVMF_PORT", 00:27:33.093 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.093 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.093 "hdgst": ${hdgst:-false}, 00:27:33.093 "ddgst": ${ddgst:-false} 00:27:33.093 }, 00:27:33.093 "method": "bdev_nvme_attach_controller" 00:27:33.093 } 00:27:33.093 EOF 00:27:33.093 )") 00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.094 { 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme$subsystem", 00:27:33.094 "trtype": "$TEST_TRANSPORT", 00:27:33.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "$NVMF_PORT", 00:27:33.094 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.094 "hdgst": ${hdgst:-false}, 00:27:33.094 "ddgst": ${ddgst:-false} 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 } 00:27:33.094 EOF 00:27:33.094 )") 00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.094 { 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme$subsystem", 00:27:33.094 "trtype": "$TEST_TRANSPORT", 00:27:33.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "$NVMF_PORT", 00:27:33.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.094 "hdgst": ${hdgst:-false}, 00:27:33.094 "ddgst": ${ddgst:-false} 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 } 00:27:33.094 EOF 00:27:33.094 )") 00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:33.094 { 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme$subsystem", 00:27:33.094 "trtype": "$TEST_TRANSPORT", 00:27:33.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "$NVMF_PORT", 00:27:33.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.094 "hdgst": ${hdgst:-false}, 00:27:33.094 "ddgst": ${ddgst:-false} 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 } 00:27:33.094 EOF 00:27:33.094 )") 00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:27:33.094 [2024-11-17 16:04:18.476486] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:33.094 [2024-11-17 16:04:18.476583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430599 ] 00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:27:33.094 16:04:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme1", 00:27:33.094 "trtype": "rdma", 00:27:33.094 "traddr": "192.168.100.8", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "4420", 00:27:33.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:33.094 "hdgst": false, 00:27:33.094 "ddgst": false 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 },{ 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme2", 00:27:33.094 "trtype": "rdma", 00:27:33.094 "traddr": "192.168.100.8", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "4420", 00:27:33.094 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:33.094 "hdgst": false, 00:27:33.094 "ddgst": false 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 },{ 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme3", 00:27:33.094 "trtype": "rdma", 00:27:33.094 "traddr": "192.168.100.8", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "4420", 00:27:33.094 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:33.094 "hdgst": false, 00:27:33.094 "ddgst": false 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 },{ 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme4", 00:27:33.094 "trtype": "rdma", 00:27:33.094 "traddr": "192.168.100.8", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "4420", 00:27:33.094 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:33.094 "hdgst": false, 00:27:33.094 "ddgst": false 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 },{ 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme5", 00:27:33.094 "trtype": "rdma", 00:27:33.094 "traddr": "192.168.100.8", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "4420", 00:27:33.094 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:33.094 "hdgst": false, 00:27:33.094 "ddgst": false 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 },{ 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme6", 00:27:33.094 "trtype": "rdma", 00:27:33.094 "traddr": "192.168.100.8", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "4420", 00:27:33.094 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:33.094 "hdgst": false, 00:27:33.094 "ddgst": false 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 },{ 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme7", 00:27:33.094 "trtype": "rdma", 00:27:33.094 "traddr": "192.168.100.8", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "4420", 00:27:33.094 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:33.094 "hdgst": false, 00:27:33.094 "ddgst": false 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 },{ 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme8", 00:27:33.094 "trtype": "rdma", 00:27:33.094 "traddr": "192.168.100.8", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "4420", 00:27:33.094 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:33.094 "hdgst": false, 00:27:33.094 "ddgst": false 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 },{ 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme9", 00:27:33.094 "trtype": "rdma", 00:27:33.094 "traddr": "192.168.100.8", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "4420", 00:27:33.094 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:33.094 "hdgst": false, 00:27:33.094 "ddgst": false 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 },{ 00:27:33.094 "params": { 00:27:33.094 "name": "Nvme10", 00:27:33.094 "trtype": "rdma", 00:27:33.094 "traddr": "192.168.100.8", 00:27:33.094 "adrfam": "ipv4", 00:27:33.094 "trsvcid": "4420", 00:27:33.094 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:33.094 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:33.094 "hdgst": false, 00:27:33.094 "ddgst": false 00:27:33.094 }, 00:27:33.094 "method": "bdev_nvme_attach_controller" 00:27:33.094 }' 00:27:33.094 [2024-11-17 16:04:18.612560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.353 [2024-11-17 16:04:18.715421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.729 Running I/O for 1 seconds... 00:27:35.665 3089.00 IOPS, 193.06 MiB/s 00:27:35.665 Latency(us) 00:27:35.665 [2024-11-17T15:04:21.209Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:35.665 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:35.665 Verification LBA range: start 0x0 length 0x400 00:27:35.665 Nvme1n1 : 1.18 325.88 20.37 0.00 0.00 191607.33 39007.03 224814.69 00:27:35.665 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:35.665 Verification LBA range: start 0x0 length 0x400 00:27:35.665 Nvme2n1 : 1.18 345.84 21.62 0.00 0.00 176183.51 5059.38 206359.76 00:27:35.665 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:35.665 Verification LBA range: start 0x0 length 0x400 00:27:35.665 Nvme3n1 : 1.18 347.98 21.75 0.00 0.00 172472.51 12111.05 161900.13 00:27:35.665 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:35.665 Verification LBA range: start 0x0 length 0x400 00:27:35.665 Nvme4n1 : 1.18 346.74 21.67 0.00 0.00 170446.54 19398.66 153511.53 00:27:35.665 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:35.665 Verification LBA range: start 0x0 length 0x400 00:27:35.665 Nvme5n1 : 1.18 327.90 20.49 0.00 0.00 175068.41 24326.96 139250.89 00:27:35.665 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:35.665 Verification LBA range: start 0x0 length 0x400 00:27:35.665 Nvme6n1 : 1.18 331.02 20.69 0.00 0.00 171349.22 22963.81 125829.12 00:27:35.665 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:35.665 Verification LBA range: start 0x0 length 0x400 00:27:35.665 Nvme7n1 : 1.19 328.26 20.52 0.00 0.00 169838.04 20971.52 114085.07 00:27:35.665 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:35.665 Verification LBA range: start 0x0 length 0x400 00:27:35.665 Nvme8n1 : 1.19 342.24 21.39 0.00 0.00 162197.92 14365.49 105696.46 00:27:35.665 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:35.665 Verification LBA range: start 0x0 length 0x400 00:27:35.665 Nvme9n1 : 1.19 375.37 23.46 0.00 0.00 149742.33 4875.88 115762.79 
00:27:35.665 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:35.665 Verification LBA range: start 0x0 length 0x400 00:27:35.665 Nvme10n1 : 1.20 321.16 20.07 0.00 0.00 172106.68 6317.67 234881.02 00:27:35.665 [2024-11-17T15:04:21.209Z] =================================================================================================================== 00:27:35.665 [2024-11-17T15:04:21.209Z] Total : 3392.39 212.02 0.00 0.00 170750.67 4875.88 234881.02 00:27:36.601 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:27:36.601 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:36.601 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:36.601 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:36.601 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:36.601 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:36.601 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:27:36.601 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:36.601 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:36.601 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:27:36.601 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:36.860 rmmod nvme_rdma 00:27:36.860 rmmod nvme_fabrics 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3429796 ']' 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3429796 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3429796 ']' 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3429796 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3429796 00:27:36.860 16:04:22 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3429796' 00:27:36.860 killing process with pid 3429796 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3429796 00:27:36.860 16:04:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3429796 00:27:41.048 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:41.049 00:27:41.049 real 0m18.573s 00:27:41.049 user 0m50.760s 00:27:41.049 sys 0m6.719s 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:41.049 ************************************ 00:27:41.049 END TEST nvmf_shutdown_tc1 00:27:41.049 ************************************ 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:41.049 ************************************ 00:27:41.049 START TEST nvmf_shutdown_tc2 00:27:41.049 ************************************ 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.049 16:04:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:41.049 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:41.049 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:41.049 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:41.049 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:41.049 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # rdma_device_init 00:27:41.050 
16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # 
for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:41.050 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:41.050 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:41.050 altname enp217s0f0np0 00:27:41.050 altname ens818f0np0 00:27:41.050 inet 192.168.100.8/24 scope global mlx_0_0 00:27:41.050 valid_lft forever preferred_lft forever 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:41.050 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:41.050 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:41.050 altname enp217s0f1np1 00:27:41.050 altname ens818f1np1 00:27:41.050 inet 192.168.100.9/24 scope global mlx_0_1 00:27:41.050 valid_lft forever preferred_lft forever 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:27:41.050 16:04:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:41.050 16:04:25 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:41.050 192.168.100.9' 00:27:41.050 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:41.050 192.168.100.9' 00:27:41.051 16:04:25 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # head -n 1 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:41.051 192.168.100.9' 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # tail -n +2 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # head -n 1 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3431960 00:27:41.051 16:04:26 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3431960 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3431960 ']' 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.051 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.051 [2024-11-17 16:04:26.135849] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:41.051 [2024-11-17 16:04:26.135940] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.051 [2024-11-17 16:04:26.269944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:41.051 [2024-11-17 16:04:26.370482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.051 [2024-11-17 16:04:26.370529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.051 [2024-11-17 16:04:26.370543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.051 [2024-11-17 16:04:26.370556] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.051 [2024-11-17 16:04:26.370570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
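The -m 0x1E core mask passed to nvmf_tgt above selects cores 1 through 4 (0x1E is binary 11110, with bit 0 clear), which is why the reactor notices that follow report cores 1-4 rather than core 0. A quick standalone check of that decoding, illustrative only and not part of shutdown.sh (assumes bc and printf are available on the build host):

    # decode the core mask; expected output: 0x1E = 11110 (binary) -> bits 1,2,3,4 set => cores 1-4
    printf '0x%X = %s (binary)\n' 0x1E "$(echo 'obase=2; ibase=16; 1E' | bc)"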
00:27:41.051 [2024-11-17 16:04:26.373095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:41.051 [2024-11-17 16:04:26.373163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:41.051 [2024-11-17 16:04:26.373248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.051 [2024-11-17 16:04:26.373274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:41.618 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.618 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:41.618 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:41.618 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:41.618 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.618 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.618 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:41.618 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.618 16:04:26 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.618 [2024-11-17 16:04:27.021183] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f85a0d6a940) succeed. 00:27:41.618 [2024-11-17 16:04:27.030622] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f85a0d26940) succeed. 
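With the target up, shutdown.sh@21 registers the RDMA transport through rpc_cmd, and the two create_ib_device notices above confirm that both mlx5 ports were picked up. The same call can be issued by hand against a running target; the rpc.py path and the default RPC socket below are assumptions for illustration, while the flag values mirror the trace exactly:

    # equivalent manual invocation (sketch); assumes the default /var/tmp/spdk.sock RPC socket
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192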
00:27:41.877 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.877 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:41.877 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:41.877 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.877 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:41.877 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.878 16:04:27 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.136 Malloc1 00:27:42.136 [2024-11-17 16:04:27.446262] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:42.136 Malloc2 00:27:42.136 Malloc3 00:27:42.395 Malloc4 00:27:42.395 Malloc5 00:27:42.395 Malloc6 00:27:42.653 Malloc7 00:27:42.653 Malloc8 00:27:42.653 Malloc9 00:27:42.912 Malloc10 00:27:42.912 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3432433 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3432433 /var/tmp/bdevperf.sock 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3432433 ']' 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:42.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
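The repeated shutdown.sh@29 cat calls above append one RPC block per subsystem (1 through 10) to rpcs.txt, and the single rpc_cmd at shutdown.sh@36 replays that file, producing the Malloc1-Malloc10 bdevs and the RDMA listener on 192.168.100.8:4420 reported in the trace. The exact contents of rpcs.txt are not echoed in this log; the block below is a plausible sketch of one iteration, inferred from the bdev names, NQNs, and listener address used later, with the malloc size/block-size arguments and the serial number being pure assumptions:

    # hypothetical rpcs.txt entry for subsystem $i (i = 1..10); sizes and serial are assumed, not taken from the log
    bdev_malloc_create -b Malloc$i 128 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420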
00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:42.913 { 00:27:42.913 "params": { 00:27:42.913 "name": "Nvme$subsystem", 00:27:42.913 "trtype": "$TEST_TRANSPORT", 00:27:42.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.913 "adrfam": "ipv4", 00:27:42.913 "trsvcid": "$NVMF_PORT", 00:27:42.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.913 "hdgst": ${hdgst:-false}, 00:27:42.913 "ddgst": ${ddgst:-false} 00:27:42.913 }, 00:27:42.913 "method": "bdev_nvme_attach_controller" 00:27:42.913 } 00:27:42.913 EOF 00:27:42.913 )") 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:42.913 { 00:27:42.913 "params": { 00:27:42.913 "name": "Nvme$subsystem", 00:27:42.913 "trtype": "$TEST_TRANSPORT", 00:27:42.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.913 "adrfam": "ipv4", 00:27:42.913 "trsvcid": "$NVMF_PORT", 00:27:42.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.913 "hdgst": ${hdgst:-false}, 00:27:42.913 "ddgst": ${ddgst:-false} 00:27:42.913 }, 00:27:42.913 "method": "bdev_nvme_attach_controller" 00:27:42.913 } 00:27:42.913 EOF 00:27:42.913 )") 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:42.913 { 00:27:42.913 "params": { 00:27:42.913 "name": "Nvme$subsystem", 00:27:42.913 "trtype": "$TEST_TRANSPORT", 00:27:42.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.913 "adrfam": "ipv4", 00:27:42.913 "trsvcid": "$NVMF_PORT", 00:27:42.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.913 "hdgst": ${hdgst:-false}, 00:27:42.913 "ddgst": ${ddgst:-false} 00:27:42.913 }, 00:27:42.913 "method": "bdev_nvme_attach_controller" 00:27:42.913 } 00:27:42.913 EOF 00:27:42.913 )") 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:42.913 16:04:28 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:42.913 { 00:27:42.913 "params": { 00:27:42.913 "name": "Nvme$subsystem", 00:27:42.913 "trtype": "$TEST_TRANSPORT", 00:27:42.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.913 "adrfam": "ipv4", 00:27:42.913 "trsvcid": "$NVMF_PORT", 00:27:42.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.913 "hdgst": ${hdgst:-false}, 00:27:42.913 "ddgst": ${ddgst:-false} 00:27:42.913 }, 00:27:42.913 "method": "bdev_nvme_attach_controller" 00:27:42.913 } 00:27:42.913 EOF 00:27:42.913 )") 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:42.913 { 00:27:42.913 "params": { 00:27:42.913 "name": "Nvme$subsystem", 00:27:42.913 "trtype": "$TEST_TRANSPORT", 00:27:42.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.913 "adrfam": "ipv4", 00:27:42.913 "trsvcid": "$NVMF_PORT", 00:27:42.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.913 "hdgst": ${hdgst:-false}, 00:27:42.913 "ddgst": ${ddgst:-false} 00:27:42.913 }, 00:27:42.913 "method": "bdev_nvme_attach_controller" 00:27:42.913 } 00:27:42.913 EOF 00:27:42.913 )") 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:42.913 { 00:27:42.913 "params": { 00:27:42.913 "name": "Nvme$subsystem", 00:27:42.913 "trtype": "$TEST_TRANSPORT", 00:27:42.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.913 "adrfam": "ipv4", 00:27:42.913 "trsvcid": "$NVMF_PORT", 00:27:42.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.913 "hdgst": ${hdgst:-false}, 00:27:42.913 "ddgst": ${ddgst:-false} 00:27:42.913 }, 00:27:42.913 "method": "bdev_nvme_attach_controller" 00:27:42.913 } 00:27:42.913 EOF 00:27:42.913 )") 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:42.913 { 00:27:42.913 "params": { 00:27:42.913 "name": "Nvme$subsystem", 00:27:42.913 "trtype": "$TEST_TRANSPORT", 00:27:42.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.913 "adrfam": "ipv4", 00:27:42.913 "trsvcid": "$NVMF_PORT", 00:27:42.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.913 "hdgst": ${hdgst:-false}, 00:27:42.913 "ddgst": ${ddgst:-false} 00:27:42.913 }, 00:27:42.913 "method": 
"bdev_nvme_attach_controller" 00:27:42.913 } 00:27:42.913 EOF 00:27:42.913 )") 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:42.913 { 00:27:42.913 "params": { 00:27:42.913 "name": "Nvme$subsystem", 00:27:42.913 "trtype": "$TEST_TRANSPORT", 00:27:42.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.913 "adrfam": "ipv4", 00:27:42.913 "trsvcid": "$NVMF_PORT", 00:27:42.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.913 "hdgst": ${hdgst:-false}, 00:27:42.913 "ddgst": ${ddgst:-false} 00:27:42.913 }, 00:27:42.913 "method": "bdev_nvme_attach_controller" 00:27:42.913 } 00:27:42.913 EOF 00:27:42.913 )") 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:42.913 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:42.913 { 00:27:42.913 "params": { 00:27:42.913 "name": "Nvme$subsystem", 00:27:42.913 "trtype": "$TEST_TRANSPORT", 00:27:42.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.913 "adrfam": "ipv4", 00:27:42.913 "trsvcid": "$NVMF_PORT", 00:27:42.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.913 "hdgst": ${hdgst:-false}, 00:27:42.914 "ddgst": ${ddgst:-false} 00:27:42.914 }, 00:27:42.914 "method": "bdev_nvme_attach_controller" 00:27:42.914 } 00:27:42.914 EOF 00:27:42.914 )") 00:27:42.914 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:42.914 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:42.914 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:42.914 { 00:27:42.914 "params": { 00:27:42.914 "name": "Nvme$subsystem", 00:27:42.914 "trtype": "$TEST_TRANSPORT", 00:27:42.914 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.914 "adrfam": "ipv4", 00:27:42.914 "trsvcid": "$NVMF_PORT", 00:27:42.914 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.914 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.914 "hdgst": ${hdgst:-false}, 00:27:42.914 "ddgst": ${ddgst:-false} 00:27:42.914 }, 00:27:42.914 "method": "bdev_nvme_attach_controller" 00:27:42.914 } 00:27:42.914 EOF 00:27:42.914 )") 00:27:42.914 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:27:42.914 [2024-11-17 16:04:28.404904] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:42.914 [2024-11-17 16:04:28.404994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432433 ] 00:27:42.914 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:27:42.914 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:27:42.914 16:04:28 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:42.914 "params": { 00:27:42.914 "name": "Nvme1", 00:27:42.914 "trtype": "rdma", 00:27:42.914 "traddr": "192.168.100.8", 00:27:42.914 "adrfam": "ipv4", 00:27:42.914 "trsvcid": "4420", 00:27:42.914 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:42.914 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:42.914 "hdgst": false, 00:27:42.914 "ddgst": false 00:27:42.914 }, 00:27:42.914 "method": "bdev_nvme_attach_controller" 00:27:42.914 },{ 00:27:42.914 "params": { 00:27:42.914 "name": "Nvme2", 00:27:42.914 "trtype": "rdma", 00:27:42.914 "traddr": "192.168.100.8", 00:27:42.914 "adrfam": "ipv4", 00:27:42.914 "trsvcid": "4420", 00:27:42.914 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:42.914 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:42.914 "hdgst": false, 00:27:42.914 "ddgst": false 00:27:42.914 }, 00:27:42.914 "method": "bdev_nvme_attach_controller" 00:27:42.914 },{ 00:27:42.914 "params": { 00:27:42.914 "name": "Nvme3", 00:27:42.914 "trtype": "rdma", 00:27:42.914 "traddr": "192.168.100.8", 00:27:42.914 "adrfam": "ipv4", 00:27:42.914 "trsvcid": "4420", 00:27:42.914 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:42.914 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:42.914 "hdgst": false, 00:27:42.914 "ddgst": false 00:27:42.914 }, 00:27:42.914 "method": "bdev_nvme_attach_controller" 00:27:42.914 },{ 00:27:42.914 "params": { 00:27:42.914 "name": "Nvme4", 00:27:42.914 "trtype": "rdma", 00:27:42.914 "traddr": "192.168.100.8", 00:27:42.914 "adrfam": "ipv4", 00:27:42.914 "trsvcid": "4420", 00:27:42.914 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:42.914 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:42.914 "hdgst": false, 00:27:42.914 "ddgst": false 00:27:42.914 }, 00:27:42.914 "method": "bdev_nvme_attach_controller" 00:27:42.914 },{ 00:27:42.914 "params": { 00:27:42.914 "name": "Nvme5", 00:27:42.914 "trtype": "rdma", 00:27:42.914 "traddr": "192.168.100.8", 00:27:42.914 "adrfam": "ipv4", 00:27:42.914 "trsvcid": "4420", 00:27:42.914 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:42.914 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:42.914 "hdgst": false, 00:27:42.914 "ddgst": false 00:27:42.914 }, 00:27:42.914 "method": "bdev_nvme_attach_controller" 00:27:42.914 },{ 00:27:42.914 "params": { 00:27:42.914 "name": "Nvme6", 00:27:42.914 "trtype": "rdma", 00:27:42.914 "traddr": "192.168.100.8", 00:27:42.914 "adrfam": "ipv4", 00:27:42.914 "trsvcid": "4420", 00:27:42.914 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:42.914 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:42.914 "hdgst": false, 00:27:42.914 "ddgst": false 00:27:42.914 }, 00:27:42.914 "method": "bdev_nvme_attach_controller" 00:27:42.914 },{ 00:27:42.914 "params": { 00:27:42.914 "name": "Nvme7", 00:27:42.914 "trtype": "rdma", 00:27:42.914 "traddr": "192.168.100.8", 00:27:42.914 "adrfam": "ipv4", 00:27:42.914 "trsvcid": "4420", 00:27:42.914 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:42.914 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:42.914 "hdgst": false, 00:27:42.914 "ddgst": false 00:27:42.914 }, 00:27:42.914 "method": "bdev_nvme_attach_controller" 00:27:42.914 },{ 00:27:42.914 "params": { 00:27:42.914 "name": "Nvme8", 00:27:42.914 "trtype": "rdma", 00:27:42.914 "traddr": "192.168.100.8", 00:27:42.914 "adrfam": "ipv4", 00:27:42.914 "trsvcid": "4420", 00:27:42.914 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:27:42.914 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:42.914 "hdgst": false, 00:27:42.914 "ddgst": false 00:27:42.914 }, 00:27:42.914 "method": "bdev_nvme_attach_controller" 00:27:42.914 },{ 00:27:42.914 "params": { 00:27:42.914 "name": "Nvme9", 00:27:42.914 "trtype": "rdma", 00:27:42.914 "traddr": "192.168.100.8", 00:27:42.914 "adrfam": "ipv4", 00:27:42.914 "trsvcid": "4420", 00:27:42.914 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:42.914 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:42.914 "hdgst": false, 00:27:42.914 "ddgst": false 00:27:42.914 }, 00:27:42.914 "method": "bdev_nvme_attach_controller" 00:27:42.914 },{ 00:27:42.914 "params": { 00:27:42.914 "name": "Nvme10", 00:27:42.914 "trtype": "rdma", 00:27:42.914 "traddr": "192.168.100.8", 00:27:42.914 "adrfam": "ipv4", 00:27:42.914 "trsvcid": "4420", 00:27:42.914 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:42.914 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:42.914 "hdgst": false, 00:27:42.914 "ddgst": false 00:27:42.914 }, 00:27:42.914 "method": "bdev_nvme_attach_controller" 00:27:42.914 }' 00:27:43.173 [2024-11-17 16:04:28.541116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.173 [2024-11-17 16:04:28.651282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.553 Running I/O for 10 seconds... 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:44.553 16:04:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.553 16:04:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.811 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.811 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=51 00:27:44.811 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 51 -ge 100 ']' 00:27:44.811 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:45.069 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3432433 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3432433 ']' 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3432433 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3432433 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3432433' 00:27:45.070 killing process with pid 3432433 00:27:45.070 16:04:30 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3432433 00:27:45.070 16:04:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3432433 00:27:45.328 Received shutdown signal, test time was about 0.872759 seconds 00:27:45.328 00:27:45.328 Latency(us) 00:27:45.328 [2024-11-17T15:04:30.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:45.328 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.328 Verification LBA range: start 0x0 length 0x400 00:27:45.328 Nvme1n1 : 0.85 374.67 23.42 0.00 0.00 166840.69 4692.38 187065.96 00:27:45.328 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.328 Verification LBA range: start 0x0 length 0x400 00:27:45.328 Nvme2n1 : 0.86 374.09 23.38 0.00 0.00 164102.06 9122.61 182871.65 00:27:45.328 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.328 Verification LBA range: start 0x0 length 0x400 00:27:45.328 Nvme3n1 : 0.86 373.44 23.34 0.00 0.00 161338.65 9279.90 175321.91 00:27:45.328 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.328 Verification LBA range: start 0x0 length 0x400 00:27:45.329 Nvme4n1 : 0.86 372.77 23.30 0.00 0.00 158423.45 9594.47 166933.30 00:27:45.329 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.329 Verification LBA range: start 0x0 length 0x400 00:27:45.329 Nvme5n1 : 0.86 371.78 23.24 0.00 0.00 156548.14 10643.05 151833.80 00:27:45.329 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.329 Verification LBA range: start 0x0 length 0x400 00:27:45.329 Nvme6n1 : 0.86 370.82 23.18 0.00 0.00 153710.92 12111.05 135895.45 00:27:45.329 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.329 Verification LBA range: start 0x0 length 0x400 00:27:45.329 Nvme7n1 : 0.87 369.86 23.12 0.00 0.00 150861.25 13421.77 120795.96 00:27:45.329 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.329 Verification LBA range: start 0x0 length 0x400 00:27:45.329 Nvme8n1 : 0.87 368.92 23.06 0.00 0.00 148014.37 14784.92 104857.60 00:27:45.329 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.329 Verification LBA range: start 0x0 length 0x400 00:27:45.329 Nvme9n1 : 0.87 367.98 23.00 0.00 0.00 145166.66 16148.07 110729.63 00:27:45.329 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:45.329 Verification LBA range: start 0x0 length 0x400 00:27:45.329 Nvme10n1 : 0.87 293.64 18.35 0.00 0.00 177544.50 10433.33 226492.42 00:27:45.329 [2024-11-17T15:04:30.873Z] =================================================================================================================== 00:27:45.329 [2024-11-17T15:04:30.873Z] Total : 3637.98 227.37 0.00 0.00 157861.41 4692.38 226492.42 00:27:46.279 16:04:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:27:47.215 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3431960 00:27:47.215 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:27:47.215 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:47.215 16:04:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:27:47.474 rmmod nvme_rdma 00:27:47.474 rmmod nvme_fabrics 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3431960 ']' 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3431960 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3431960 ']' 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3431960 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3431960 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3431960' 00:27:47.474 killing process with pid 3431960 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3431960 00:27:47.474 16:04:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3431960 00:27:51.664 16:04:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:27:51.664 00:27:51.664 real 0m10.510s 00:27:51.664 user 0m41.008s 00:27:51.664 sys 0m1.597s 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.664 ************************************ 00:27:51.664 END TEST nvmf_shutdown_tc2 00:27:51.664 ************************************ 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:51.664 ************************************ 00:27:51.664 START TEST nvmf_shutdown_tc3 00:27:51.664 ************************************ 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.664 16:04:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.664 
16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:51.664 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:51.664 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:51.665 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:51.665 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:51.665 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # rdma_device_init 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:27:51.665 16:04:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:27:51.665 16:04:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:27:51.665 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:51.665 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:51.665 altname enp217s0f0np0 00:27:51.665 altname ens818f0np0 00:27:51.665 inet 192.168.100.8/24 scope global mlx_0_0 00:27:51.665 valid_lft forever preferred_lft forever 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:27:51.665 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:51.665 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:51.665 altname enp217s0f1np1 00:27:51.665 altname ens818f1np1 00:27:51.665 inet 192.168.100.9/24 scope global mlx_0_1 00:27:51.665 valid_lft forever preferred_lft forever 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:27:51.665 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 
-- # ip -o -4 addr show mlx_0_1 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:27:51.666 192.168.100.9' 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:27:51.666 192.168.100.9' 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # head -n 1 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:27:51.666 192.168.100.9' 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # tail -n +2 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # head -n 1 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3433989 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3433989 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3433989 ']' 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.666 16:04:36 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.666 16:04:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.666 [2024-11-17 16:04:36.752940] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:51.666 [2024-11-17 16:04:36.753030] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.666 [2024-11-17 16:04:36.885964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.666 [2024-11-17 16:04:36.982858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.666 [2024-11-17 16:04:36.982904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.666 [2024-11-17 16:04:36.982916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.666 [2024-11-17 16:04:36.982928] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.666 [2024-11-17 16:04:36.982937] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.666 [2024-11-17 16:04:36.985287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.666 [2024-11-17 16:04:36.985359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.666 [2024-11-17 16:04:36.985441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.666 [2024-11-17 16:04:36.985466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:52.233 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:52.233 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:27:52.233 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:52.233 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:52.233 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.233 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.233 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:52.233 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.233 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.233 [2024-11-17 16:04:37.627357] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x6120000292c0/0x7ffa8ab5c940) succeed. 00:27:52.233 [2024-11-17 16:04:37.636898] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7ffa8ab18940) succeed. 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.538 
16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.538 16:04:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.538 Malloc1 00:27:52.538 [2024-11-17 16:04:38.057390] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:52.847 Malloc2 00:27:52.847 Malloc3 00:27:52.847 Malloc4 00:27:53.165 Malloc5 00:27:53.165 Malloc6 00:27:53.165 Malloc7 00:27:53.165 Malloc8 00:27:53.423 Malloc9 00:27:53.423 Malloc10 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3434319 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3434319 /var/tmp/bdevperf.sock 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3434319 ']' 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:53.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
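The bdevperf invocation traced next is driven by a JSON config that gen_nvmf_target_json (nvmf/common.sh@560-586 in this trace) builds on the fly: one bdev_nvme_attach_controller stanza per subsystem, comma-joined and fed to bdevperf through /dev/fd/63. A minimal standalone sketch of that pattern, not the helper itself; the address and port below are the ones discovered earlier in this log, everything else is illustrative:

    #!/usr/bin/env bash
    # Sketch only: mirrors the per-subsystem stanza loop visible in the trace below.
    TARGET_IP=192.168.100.8   # NVMF_FIRST_TARGET_IP from the interface scan above
    PORT=4420                 # listener port announced by the target
    stanzas=()
    for i in {1..10}; do
      stanzas+=("$(printf '{ "params": { "name": "Nvme%d", "trtype": "rdma", "traddr": "%s", "adrfam": "ipv4", "trsvcid": "%s", "subnqn": "nqn.2016-06.io.spdk:cnode%d", "hostnqn": "nqn.2016-06.io.spdk:host%d", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' \
        "$i" "$TARGET_IP" "$PORT" "$i" "$i")")
    done
    # The real helper joins the stanzas with IFS=, and validates the result with jq,
    # producing the flat config that the log prints a few lines further down.
    ( IFS=,; echo "${stanzas[*]}" )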
00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.423 { 00:27:53.423 "params": { 00:27:53.423 "name": "Nvme$subsystem", 00:27:53.423 "trtype": "$TEST_TRANSPORT", 00:27:53.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.423 "adrfam": "ipv4", 00:27:53.423 "trsvcid": "$NVMF_PORT", 00:27:53.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.423 "hdgst": ${hdgst:-false}, 00:27:53.423 "ddgst": ${ddgst:-false} 00:27:53.423 }, 00:27:53.423 "method": "bdev_nvme_attach_controller" 00:27:53.423 } 00:27:53.423 EOF 00:27:53.423 )") 00:27:53.423 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.682 { 00:27:53.682 "params": { 00:27:53.682 "name": "Nvme$subsystem", 00:27:53.682 "trtype": "$TEST_TRANSPORT", 00:27:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.682 "adrfam": "ipv4", 00:27:53.682 "trsvcid": "$NVMF_PORT", 00:27:53.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.682 "hdgst": ${hdgst:-false}, 00:27:53.682 "ddgst": ${ddgst:-false} 00:27:53.682 }, 00:27:53.682 "method": "bdev_nvme_attach_controller" 00:27:53.682 } 00:27:53.682 EOF 00:27:53.682 )") 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.682 { 00:27:53.682 "params": { 00:27:53.682 "name": "Nvme$subsystem", 00:27:53.682 "trtype": "$TEST_TRANSPORT", 00:27:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.682 "adrfam": "ipv4", 00:27:53.682 "trsvcid": "$NVMF_PORT", 00:27:53.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.682 "hdgst": ${hdgst:-false}, 00:27:53.682 "ddgst": ${ddgst:-false} 00:27:53.682 }, 00:27:53.682 "method": 
"bdev_nvme_attach_controller" 00:27:53.682 } 00:27:53.682 EOF 00:27:53.682 )") 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.682 { 00:27:53.682 "params": { 00:27:53.682 "name": "Nvme$subsystem", 00:27:53.682 "trtype": "$TEST_TRANSPORT", 00:27:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.682 "adrfam": "ipv4", 00:27:53.682 "trsvcid": "$NVMF_PORT", 00:27:53.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.682 "hdgst": ${hdgst:-false}, 00:27:53.682 "ddgst": ${ddgst:-false} 00:27:53.682 }, 00:27:53.682 "method": "bdev_nvme_attach_controller" 00:27:53.682 } 00:27:53.682 EOF 00:27:53.682 )") 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.682 { 00:27:53.682 "params": { 00:27:53.682 "name": "Nvme$subsystem", 00:27:53.682 "trtype": "$TEST_TRANSPORT", 00:27:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.682 "adrfam": "ipv4", 00:27:53.682 "trsvcid": "$NVMF_PORT", 00:27:53.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.682 "hdgst": ${hdgst:-false}, 00:27:53.682 "ddgst": ${ddgst:-false} 00:27:53.682 }, 00:27:53.682 "method": "bdev_nvme_attach_controller" 00:27:53.682 } 00:27:53.682 EOF 00:27:53.682 )") 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.682 16:04:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.682 { 00:27:53.682 "params": { 00:27:53.682 "name": "Nvme$subsystem", 00:27:53.682 "trtype": "$TEST_TRANSPORT", 00:27:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.682 "adrfam": "ipv4", 00:27:53.682 "trsvcid": "$NVMF_PORT", 00:27:53.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.682 "hdgst": ${hdgst:-false}, 00:27:53.682 "ddgst": ${ddgst:-false} 00:27:53.682 }, 00:27:53.682 "method": "bdev_nvme_attach_controller" 00:27:53.682 } 00:27:53.682 EOF 00:27:53.682 )") 00:27:53.682 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:53.682 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.682 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.682 { 00:27:53.682 "params": { 00:27:53.682 "name": "Nvme$subsystem", 00:27:53.682 "trtype": "$TEST_TRANSPORT", 00:27:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.682 "adrfam": "ipv4", 00:27:53.682 "trsvcid": "$NVMF_PORT", 00:27:53.682 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.682 "hdgst": ${hdgst:-false}, 00:27:53.682 "ddgst": ${ddgst:-false} 00:27:53.682 }, 00:27:53.682 "method": "bdev_nvme_attach_controller" 00:27:53.682 } 00:27:53.682 EOF 00:27:53.682 )") 00:27:53.682 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:53.682 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.682 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.682 { 00:27:53.682 "params": { 00:27:53.682 "name": "Nvme$subsystem", 00:27:53.682 "trtype": "$TEST_TRANSPORT", 00:27:53.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.682 "adrfam": "ipv4", 00:27:53.682 "trsvcid": "$NVMF_PORT", 00:27:53.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.682 "hdgst": ${hdgst:-false}, 00:27:53.682 "ddgst": ${ddgst:-false} 00:27:53.682 }, 00:27:53.682 "method": "bdev_nvme_attach_controller" 00:27:53.682 } 00:27:53.682 EOF 00:27:53.682 )") 00:27:53.682 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:53.682 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.683 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.683 { 00:27:53.683 "params": { 00:27:53.683 "name": "Nvme$subsystem", 00:27:53.683 "trtype": "$TEST_TRANSPORT", 00:27:53.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.683 "adrfam": "ipv4", 00:27:53.683 "trsvcid": "$NVMF_PORT", 00:27:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.683 "hdgst": ${hdgst:-false}, 00:27:53.683 "ddgst": ${ddgst:-false} 00:27:53.683 }, 00:27:53.683 "method": "bdev_nvme_attach_controller" 00:27:53.683 } 00:27:53.683 EOF 00:27:53.683 )") 00:27:53.683 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:53.683 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:53.683 [2024-11-17 16:04:39.033304] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:27:53.683 [2024-11-17 16:04:39.033398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434319 ] 00:27:53.683 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:53.683 { 00:27:53.683 "params": { 00:27:53.683 "name": "Nvme$subsystem", 00:27:53.683 "trtype": "$TEST_TRANSPORT", 00:27:53.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.683 "adrfam": "ipv4", 00:27:53.683 "trsvcid": "$NVMF_PORT", 00:27:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.683 "hdgst": ${hdgst:-false}, 00:27:53.683 "ddgst": ${ddgst:-false} 00:27:53.683 }, 00:27:53.683 "method": "bdev_nvme_attach_controller" 00:27:53.683 } 00:27:53.683 EOF 00:27:53.683 )") 00:27:53.683 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:27:53.683 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:27:53.683 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:27:53.683 16:04:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:53.683 "params": { 00:27:53.683 "name": "Nvme1", 00:27:53.683 "trtype": "rdma", 00:27:53.683 "traddr": "192.168.100.8", 00:27:53.683 "adrfam": "ipv4", 00:27:53.683 "trsvcid": "4420", 00:27:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:53.683 "hdgst": false, 00:27:53.683 "ddgst": false 00:27:53.683 }, 00:27:53.683 "method": "bdev_nvme_attach_controller" 00:27:53.683 },{ 00:27:53.683 "params": { 00:27:53.683 "name": "Nvme2", 00:27:53.683 "trtype": "rdma", 00:27:53.683 "traddr": "192.168.100.8", 00:27:53.683 "adrfam": "ipv4", 00:27:53.683 "trsvcid": "4420", 00:27:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:53.683 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:53.683 "hdgst": false, 00:27:53.683 "ddgst": false 00:27:53.683 }, 00:27:53.683 "method": "bdev_nvme_attach_controller" 00:27:53.683 },{ 00:27:53.683 "params": { 00:27:53.683 "name": "Nvme3", 00:27:53.683 "trtype": "rdma", 00:27:53.683 "traddr": "192.168.100.8", 00:27:53.683 "adrfam": "ipv4", 00:27:53.683 "trsvcid": "4420", 00:27:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:53.683 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:53.683 "hdgst": false, 00:27:53.683 "ddgst": false 00:27:53.683 }, 00:27:53.683 "method": "bdev_nvme_attach_controller" 00:27:53.683 },{ 00:27:53.683 "params": { 00:27:53.683 "name": "Nvme4", 00:27:53.683 "trtype": "rdma", 00:27:53.683 "traddr": "192.168.100.8", 00:27:53.683 "adrfam": "ipv4", 00:27:53.683 "trsvcid": "4420", 00:27:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:53.683 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:53.683 "hdgst": false, 00:27:53.683 "ddgst": false 00:27:53.683 }, 00:27:53.683 "method": "bdev_nvme_attach_controller" 00:27:53.683 },{ 00:27:53.683 "params": { 00:27:53.683 "name": "Nvme5", 00:27:53.683 "trtype": "rdma", 00:27:53.683 "traddr": "192.168.100.8", 00:27:53.683 "adrfam": "ipv4", 00:27:53.683 "trsvcid": "4420", 00:27:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:53.683 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:53.683 "hdgst": false, 00:27:53.683 "ddgst": 
false 00:27:53.683 }, 00:27:53.683 "method": "bdev_nvme_attach_controller" 00:27:53.683 },{ 00:27:53.683 "params": { 00:27:53.683 "name": "Nvme6", 00:27:53.683 "trtype": "rdma", 00:27:53.683 "traddr": "192.168.100.8", 00:27:53.683 "adrfam": "ipv4", 00:27:53.683 "trsvcid": "4420", 00:27:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:53.683 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:53.683 "hdgst": false, 00:27:53.683 "ddgst": false 00:27:53.683 }, 00:27:53.683 "method": "bdev_nvme_attach_controller" 00:27:53.683 },{ 00:27:53.683 "params": { 00:27:53.683 "name": "Nvme7", 00:27:53.683 "trtype": "rdma", 00:27:53.683 "traddr": "192.168.100.8", 00:27:53.683 "adrfam": "ipv4", 00:27:53.683 "trsvcid": "4420", 00:27:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:53.683 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:53.683 "hdgst": false, 00:27:53.683 "ddgst": false 00:27:53.683 }, 00:27:53.683 "method": "bdev_nvme_attach_controller" 00:27:53.683 },{ 00:27:53.683 "params": { 00:27:53.683 "name": "Nvme8", 00:27:53.683 "trtype": "rdma", 00:27:53.683 "traddr": "192.168.100.8", 00:27:53.683 "adrfam": "ipv4", 00:27:53.683 "trsvcid": "4420", 00:27:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:53.683 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:53.683 "hdgst": false, 00:27:53.683 "ddgst": false 00:27:53.683 }, 00:27:53.683 "method": "bdev_nvme_attach_controller" 00:27:53.683 },{ 00:27:53.683 "params": { 00:27:53.683 "name": "Nvme9", 00:27:53.683 "trtype": "rdma", 00:27:53.683 "traddr": "192.168.100.8", 00:27:53.683 "adrfam": "ipv4", 00:27:53.683 "trsvcid": "4420", 00:27:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:53.683 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:53.683 "hdgst": false, 00:27:53.683 "ddgst": false 00:27:53.683 }, 00:27:53.683 "method": "bdev_nvme_attach_controller" 00:27:53.683 },{ 00:27:53.683 "params": { 00:27:53.683 "name": "Nvme10", 00:27:53.683 "trtype": "rdma", 00:27:53.683 "traddr": "192.168.100.8", 00:27:53.683 "adrfam": "ipv4", 00:27:53.683 "trsvcid": "4420", 00:27:53.683 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:53.683 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:53.683 "hdgst": false, 00:27:53.683 "ddgst": false 00:27:53.683 }, 00:27:53.683 "method": "bdev_nvme_attach_controller" 00:27:53.683 }' 00:27:53.683 [2024-11-17 16:04:39.169197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.941 [2024-11-17 16:04:39.270655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.319 Running I/O for 10 seconds... 
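With the config in place bdevperf starts its 10-second verify workload, and the trace that follows shows waitforio (target/shutdown.sh@51-70) polling Nvme1n1 until at least 100 reads have completed before the target is shot down. A minimal sketch of that polling loop, assuming the generic rpc.py client in place of the suite's rpc_cmd wrapper:

    #!/usr/bin/env bash
    # Sketch of the waitforio pattern traced below: poll bdevperf over its RPC socket
    # until the first namespace has serviced enough reads, retrying up to 10 times.
    sock=/var/tmp/bdevperf.sock
    for ((i = 10; i > 0; i--)); do
      reads=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break   # enough I/O observed; safe to kill the target
      sleep 0.25
    done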
00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:27:55.319 16:04:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:55.578 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:55.578 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:55.578 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:55.578 16:04:41 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:55.578 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.578 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=153 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 153 -ge 100 ']' 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3433989 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3433989 ']' 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3433989 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3433989 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3433989' 00:27:55.836 killing process with pid 3433989 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3433989 00:27:55.836 16:04:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3433989 00:27:57.034 2561.00 IOPS, 160.06 MiB/s [2024-11-17T15:04:42.578Z] [2024-11-17 16:04:42.291512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.291583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.291604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.291617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.291630] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.291643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.291655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.291667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.293905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:57.034 [2024-11-17 16:04:42.293942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:27:57.034 [2024-11-17 16:04:42.293979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.293995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.294008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.294024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.294038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.294050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.294063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.294074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.296072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:57.034 [2024-11-17 16:04:42.296093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:27:57.034 [2024-11-17 16:04:42.296116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.296130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32754 cdw0:0 sqhd:60a0 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.296143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.296155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32754 cdw0:0 sqhd:60a0 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.296168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.296180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32754 cdw0:0 sqhd:60a0 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.296193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.296204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32754 cdw0:0 sqhd:60a0 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.298173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:57.034 [2024-11-17 16:04:42.298192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:27:57.034 [2024-11-17 16:04:42.298214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.298228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.298240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.298253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.298265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.298277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.298290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.298301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.300719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:57.034 [2024-11-17 16:04:42.300738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:27:57.034 [2024-11-17 16:04:42.300762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.300775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.300788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.300800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.300813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.300825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.300837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.300849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.302787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:57.034 [2024-11-17 16:04:42.302806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:27:57.034 [2024-11-17 16:04:42.302828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.302841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.302854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.302866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.302879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.302891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.302904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.302916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.304986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:57.034 [2024-11-17 16:04:42.305009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:27:57.034 [2024-11-17 16:04:42.305037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.305054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.305071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.305087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.305107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.305123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.305139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.305155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.307556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:57.034 [2024-11-17 16:04:42.307584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:57.034 [2024-11-17 16:04:42.307613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.307630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.307647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.307663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.034 [2024-11-17 16:04:42.307679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.034 [2024-11-17 16:04:42.307695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.307711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.035 [2024-11-17 16:04:42.307726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.310039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:57.035 [2024-11-17 16:04:42.310062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:27:57.035 [2024-11-17 16:04:42.310088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.035 [2024-11-17 16:04:42.310105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.310122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.035 [2024-11-17 16:04:42.310138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.310154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.035 [2024-11-17 16:04:42.310171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.310187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.035 [2024-11-17 16:04:42.310203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.312658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:57.035 [2024-11-17 16:04:42.312684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:27:57.035 [2024-11-17 16:04:42.312711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.035 [2024-11-17 16:04:42.312729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.312754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.035 [2024-11-17 16:04:42.312770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.312787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.035 [2024-11-17 16:04:42.312802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.312819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.035 [2024-11-17 16:04:42.312835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.315364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:57.035 [2024-11-17 16:04:42.315388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:27:57.035 [2024-11-17 16:04:42.317903] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:27:57.035 [2024-11-17 16:04:42.320531] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:27:57.035 [2024-11-17 16:04:42.322970] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:27:57.035 [2024-11-17 16:04:42.325503] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:27:57.035 [2024-11-17 16:04:42.327976] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:27:57.035 [2024-11-17 16:04:42.330352] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:27:57.035 [2024-11-17 16:04:42.333297] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:27:57.035 [2024-11-17 16:04:42.335710] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:27:57.035 [2024-11-17 16:04:42.335819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002adf300 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.335848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.335879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002acf240 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.335898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.335922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002abf180 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.335941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.335967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aaf0c0 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.335986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a9f000 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.336026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x201002a8ef40 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.336067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a7ee80 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.336109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a6edc0 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.336149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a5ed00 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.336189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a4ec40 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.336228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a3eb80 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.336268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a2eac0 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.336308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a1ea00 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.336348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002a0e940 len:0x10000 key:0x183b00 00:27:57.035 [2024-11-17 16:04:42.336387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002deffc0 len:0x10000 key:0x184000 00:27:57.035 [2024-11-17 16:04:42.336428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ddff00 len:0x10000 key:0x184000 00:27:57.035 [2024-11-17 16:04:42.336468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dcfe40 len:0x10000 key:0x184000 00:27:57.035 [2024-11-17 16:04:42.336508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dbfd80 len:0x10000 key:0x184000 00:27:57.035 [2024-11-17 16:04:42.336548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002dafcc0 len:0x10000 key:0x184000 00:27:57.035 [2024-11-17 16:04:42.336593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d9fc00 len:0x10000 key:0x184000 00:27:57.035 [2024-11-17 16:04:42.336635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d8fb40 len:0x10000 key:0x184000 00:27:57.035 [2024-11-17 16:04:42.336675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.035 [2024-11-17 16:04:42.336698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d7fa80 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.336715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.336738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d6f9c0 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.336755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.336778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d5f900 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.336795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.336817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d4f840 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.336834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.336856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d3f780 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.336879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.336902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d2f6c0 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.336920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.336942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d1f600 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.336959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.336982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002d0f540 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cff480 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cef3c0 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cdf300 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ccf240 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002cbf180 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002caf0c0 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c9f000 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c8ef40 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c7ee80 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c6edc0 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c5ed00 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c4ec40 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c3eb80 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x201002c2eac0 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c1ea00 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002c0e940 len:0x10000 key:0x184000 00:27:57.036 [2024-11-17 16:04:42.337679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002feffc0 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 16:04:42.337720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fdff00 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 16:04:42.337760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fcfe40 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 16:04:42.337802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fbfd80 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 16:04:42.337845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002fafcc0 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 16:04:42.337889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f9fc00 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 16:04:42.337929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f8fb40 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 
16:04:42.337969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.337991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f7fa80 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 16:04:42.338008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.338030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f6f9c0 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 16:04:42.338048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.338070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f5f900 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 16:04:42.338088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.338110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f4f840 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 16:04:42.338127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.338159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f3f780 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 16:04:42.338176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.036 [2024-11-17 16:04:42.338199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f2f6c0 len:0x10000 key:0x183e00 00:27:57.036 [2024-11-17 16:04:42.338217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.338239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f1f600 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.338257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.338279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002f0f540 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.338296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.338322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eff480 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.338339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.338361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eef3c0 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.338378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.338400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002edf300 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.338417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.338439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002aef3c0 len:0x10000 key:0x183b00 00:27:57.037 [2024-11-17 16:04:42.338457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.341772] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:27:57.037 [2024-11-17 16:04:42.341810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ebf180 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.341830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.341860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002eaf0c0 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.341878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.341902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e9f000 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.341920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.341943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e8ef40 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.341960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.341983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e7ee80 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.342001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e6edc0 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 
16:04:42.342041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e5ed00 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.342080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e4ec40 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.342124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e3eb80 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.342164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e2eac0 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.342204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e1ea00 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.342244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002e0e940 len:0x10000 key:0x183e00 00:27:57.037 [2024-11-17 16:04:42.342283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031effc0 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031dff00 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031cfe40 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031bfd80 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010031afcc0 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100319fc00 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100318fb40 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100317fa80 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100316f9c0 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100315f900 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100314f840 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100313f780 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100312f6c0 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100311f600 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100310f540 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ff480 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.342981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030ef3c0 len:0x10000 key:0x184300 00:27:57.037 [2024-11-17 16:04:42.342999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.037 [2024-11-17 16:04:42.343021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030df300 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030cf240 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030bf180 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010030af0c0 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20100309f000 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100308ef40 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100307ee80 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100306edc0 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100305ed00 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100304ec40 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100303eb80 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100302eac0 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100301ea00 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 16:04:42.343513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100300e940 len:0x10000 key:0x184300 00:27:57.038 [2024-11-17 
16:04:42.343553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033effc0 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.343597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033dff00 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.343639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033cfe40 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.343678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033bfd80 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.343718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010033afcc0 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.343758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100339fc00 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.343799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100338fb40 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.343838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100337fa80 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.343878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100336f9c0 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.343917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100335f900 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.343956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.343980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100334f840 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.343997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.344026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100333f780 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.344044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.344066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100332f6c0 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.344119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.344142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100331f600 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.344162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.344185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20100330f540 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.344203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.344225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ff480 len:0x10000 key:0x184700 00:27:57.038 [2024-11-17 16:04:42.344243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.038 [2024-11-17 16:04:42.344266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032ef3c0 len:0x10000 key:0x184700 00:27:57.039 [2024-11-17 16:04:42.344283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.039 [2024-11-17 16:04:42.344306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032df300 len:0x10000 key:0x184700 00:27:57.039 [2024-11-17 16:04:42.344323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.039 [2024-11-17 16:04:42.344345] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032cf240 len:0x10000 key:0x184700 00:27:57.039 [2024-11-17 16:04:42.344362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.039 [2024-11-17 16:04:42.344384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2010032bf180 len:0x10000 key:0x184700 00:27:57.039 [2024-11-17 16:04:42.344400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.039 [2024-11-17 16:04:42.344423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x201002ecf240 len:0x10000 key:0x183e00 00:27:57.039 [2024-11-17 16:04:42.344439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.039 [2024-11-17 16:04:42.376466] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:27:57.039 [2024-11-17 16:04:42.376555] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:27:57.039 [2024-11-17 16:04:42.376582] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:27:57.039 [2024-11-17 16:04:42.376600] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:27:57.039 [2024-11-17 16:04:42.376615] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:27:57.039 [2024-11-17 16:04:42.376631] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:27:57.039 [2024-11-17 16:04:42.376647] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:27:57.039 [2024-11-17 16:04:42.376662] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:27:57.039 [2024-11-17 16:04:42.376677] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:27:57.039 [2024-11-17 16:04:42.376693] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:27:57.039 [2024-11-17 16:04:42.376708] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:27:57.039 [2024-11-17 16:04:42.383642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:27:57.039 [2024-11-17 16:04:42.383674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:27:57.039 [2024-11-17 16:04:42.384512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:27:57.039 [2024-11-17 16:04:42.384541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:27:57.039 [2024-11-17 16:04:42.384556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:27:57.039 [2024-11-17 16:04:42.384576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:27:57.039 [2024-11-17 16:04:42.387648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:27:57.039 task offset: 35584 on job bdev=Nvme1n1 fails
00:27:57.039
00:27:57.039 Latency(us)
00:27:57.039 [2024-11-17T15:04:42.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:57.039 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:57.039 Job: Nvme1n1 ended in about 1.95 seconds with error
00:27:57.039 Verification LBA range: start 0x0 length 0x400
00:27:57.039 Nvme1n1 : 1.95 131.31 8.21 32.83 0.00 386435.65 27892.12 1060320.05
00:27:57.039 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:57.039 Job: Nvme2n1 ended in about 1.95 seconds with error
00:27:57.039 Verification LBA range: start 0x0 length 0x400
00:27:57.039 Nvme2n1 : 1.95 131.25 8.20 32.81 0.00 382913.41 30408.70 1060320.05
00:27:57.039 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:57.039 Job: Nvme3n1 ended in about 1.95 seconds with error
00:27:57.039 Verification LBA range: start 0x0 length 0x400
00:27:57.039 Nvme3n1 : 1.95 135.29 8.46 32.80 0.00 370445.60 5111.81 1060320.05
00:27:57.039 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:57.039 Job: Nvme4n1 ended in about 1.95 seconds with error
00:27:57.039 Verification LBA range: start 0x0 length 0x400
00:27:57.039 Nvme4n1 : 1.95 142.40 8.90 32.78 0.00 352082.20 5452.60 1053609.16
00:27:57.039 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:57.039 Job: Nvme5n1 ended in about 1.95 seconds with error
00:27:57.039 Verification LBA range: start 0x0 length 0x400
00:27:57.039 Nvme5n1 : 1.95 134.66 8.42 32.77 0.00 365202.59 10643.05 1053609.16
00:27:57.039 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:57.039 Job: Nvme6n1 ended in about 1.95 seconds with error
00:27:57.039 Verification LBA range: start 0x0 length 0x400
00:27:57.039 Nvme6n1 : 1.95 131.02 8.19 32.76 0.00 370129.63 54945.38 1053609.16
00:27:57.039 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:57.039 Job: Nvme7n1 ended in about 1.95 seconds with error
00:27:57.039 Verification LBA range: start 0x0 length 0x400
00:27:57.039 Nvme7n1 : 1.95 140.17 8.76 32.74 0.00 347484.44 14050.92 1053609.16
00:27:57.039 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:57.039 Job: Nvme8n1 ended in about 1.96 seconds with error
00:27:57.039 Verification LBA range: start 0x0 length 0x400
00:27:57.039 Nvme8n1 : 1.96 136.02 8.50 32.73 0.00 352630.39 18979.23 1046898.28
00:27:57.039 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:57.039 Job: Nvme9n1 ended in about 1.91 seconds with error
00:27:57.039 Verification LBA range: start 0x0 length 0x400
00:27:57.039 Nvme9n1 : 1.91 133.95 8.37 33.49 0.00 353016.22 53687.09 1080452.71
00:27:57.039 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:57.039 Job: Nvme10n1 ended in about 1.92 seconds with error
00:27:57.039 Verification LBA range: start 0x0 length 0x400
00:27:57.039 Nvme10n1 : 1.92 100.15 6.26 33.38 0.00 438434.20 58300.83 1073741.82
00:27:57.039 [2024-11-17T15:04:42.583Z] ===================================================================================================================
00:27:57.039 [2024-11-17T15:04:42.583Z] Total : 1316.23 82.26 329.08 0.00 370195.19 5111.81 1080452.71
00:27:57.039 [2024-11-17 16:04:42.513542] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:57.039 [2024-11-17 16:04:42.513611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:27:57.039 [2024-11-17 16:04:42.513644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:27:57.039 [2024-11-17 16:04:42.513666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:27:57.039 [2024-11-17 16:04:42.524501] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:57.039 [2024-11-17 16:04:42.524532] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:57.039 [2024-11-17 16:04:42.524548] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800
00:27:57.039 [2024-11-17 16:04:42.524631] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:57.039 [2024-11-17 16:04:42.524646] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:57.039 [2024-11-17 16:04:42.524655] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200007fff240
00:27:57.039 [2024-11-17 16:04:42.530338] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:57.039 [2024-11-17 16:04:42.530370] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:57.039 [2024-11-17 16:04:42.530385] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fbb200
00:27:57.039 [2024-11-17 16:04:42.530483] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:57.039 [2024-11-17 16:04:42.530500] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:27:57.039 [2024-11-17 16:04:42.530517] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fd3dc0
00:27:57.039 [2024-11-17 16:04:42.530594] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:27:57.039 [2024-11-17
16:04:42.530612] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:57.039 [2024-11-17 16:04:42.530625] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fbe180 00:27:57.039 [2024-11-17 16:04:42.530712] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:57.039 [2024-11-17 16:04:42.530730] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:57.039 [2024-11-17 16:04:42.530743] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fc7500 00:27:57.039 [2024-11-17 16:04:42.531506] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:57.039 [2024-11-17 16:04:42.531529] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:57.039 [2024-11-17 16:04:42.531542] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016fa3bc0 00:27:57.039 [2024-11-17 16:04:42.531653] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:57.039 [2024-11-17 16:04:42.531672] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:57.039 [2024-11-17 16:04:42.531685] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016f83a80 00:27:57.039 [2024-11-17 16:04:42.531816] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:57.040 [2024-11-17 16:04:42.531834] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:57.040 [2024-11-17 16:04:42.531846] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016f8ee00 00:27:57.040 [2024-11-17 16:04:42.531970] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:27:57.040 [2024-11-17 16:04:42.531988] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:27:57.040 [2024-11-17 16:04:42.532001] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200016f8e680 00:27:58.414 [2024-11-17 16:04:43.529107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:58.414 [2024-11-17 16:04:43.529162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:58.415 [2024-11-17 16:04:43.530779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:58.415 [2024-11-17 16:04:43.530801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
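The per-device throughput numbers in the latency table above are consistent with bdevperf's 64 KiB I/O size (65536 bytes, as printed in each Job header): the MiB/s column is simply IOPS times the I/O size. A quick check against the Nvme1n1 row:

  # 131.31 IOPS at 65536 bytes per I/O should give the reported 8.21 MiB/s
  awk 'BEGIN { printf "%.2f\n", 131.31 * 65536 / (1024 * 1024) }'    # prints 8.21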
00:27:58.415 [2024-11-17 16:04:43.530873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:27:58.415 [2024-11-17 16:04:43.530889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:27:58.415 [2024-11-17 16:04:43.530903] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:27:58.415 [2024-11-17 16:04:43.530923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:27:58.415 [2024-11-17 16:04:43.530949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:27:58.415 [2024-11-17 16:04:43.530965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:27:58.415 [2024-11-17 16:04:43.530977] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] already in failed state 00:27:58.415 [2024-11-17 16:04:43.530990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:27:58.415 [2024-11-17 16:04:43.534478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:58.415 [2024-11-17 16:04:43.534508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:27:58.415 [2024-11-17 16:04:43.536087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:58.415 [2024-11-17 16:04:43.536105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:27:58.415 [2024-11-17 16:04:43.537522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:58.415 [2024-11-17 16:04:43.537540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:27:58.415 [2024-11-17 16:04:43.538795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:58.415 [2024-11-17 16:04:43.538817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:27:58.415 [2024-11-17 16:04:43.540187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:58.415 [2024-11-17 16:04:43.540211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:27:58.415 [2024-11-17 16:04:43.541677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:58.415 [2024-11-17 16:04:43.541697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:27:58.415 [2024-11-17 16:04:43.542827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:58.415 [2024-11-17 16:04:43.542848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:27:58.415 [2024-11-17 16:04:43.544072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:27:58.415 [2024-11-17 16:04:43.544094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:27:58.415 [2024-11-17 16:04:43.544108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:27:58.415 [2024-11-17 16:04:43.544125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:27:58.415 [2024-11-17 16:04:43.544140] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] already in failed state 00:27:58.415 [2024-11-17 16:04:43.544159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:27:58.415 [2024-11-17 16:04:43.544184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:27:58.415 [2024-11-17 16:04:43.544199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:27:58.415 [2024-11-17 16:04:43.544213] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] already in failed state 00:27:58.415 [2024-11-17 16:04:43.544229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:27:58.415 [2024-11-17 16:04:43.544253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:27:58.415 [2024-11-17 16:04:43.544269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:27:58.415 [2024-11-17 16:04:43.544283] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] already in failed state 00:27:58.415 [2024-11-17 16:04:43.544298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:27:58.415 [2024-11-17 16:04:43.544316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:27:58.415 [2024-11-17 16:04:43.544331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:27:58.415 [2024-11-17 16:04:43.544346] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] already in failed state 00:27:58.415 [2024-11-17 16:04:43.544361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:27:58.415 [2024-11-17 16:04:43.544499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:27:58.415 [2024-11-17 16:04:43.544517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:27:58.415 [2024-11-17 16:04:43.544532] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] already in failed state 00:27:58.415 [2024-11-17 16:04:43.544548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:27:58.415 [2024-11-17 16:04:43.544575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:27:58.415 [2024-11-17 16:04:43.544590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:27:58.415 [2024-11-17 16:04:43.544605] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] already in failed state 00:27:58.415 [2024-11-17 16:04:43.544620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:27:58.415 [2024-11-17 16:04:43.544640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:27:58.415 [2024-11-17 16:04:43.544655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:27:58.415 [2024-11-17 16:04:43.544669] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] already in failed state 00:27:58.415 [2024-11-17 16:04:43.544685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:27:58.415 [2024-11-17 16:04:43.544703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:27:58.415 [2024-11-17 16:04:43.544717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:27:58.415 [2024-11-17 16:04:43.544732] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] already in failed state 00:27:58.415 [2024-11-17 16:04:43.544747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
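The sequence is the same for every backend: the RDMA reconnect attempts are rejected because the target side is going away, the admin qpair then reports CQ transport error -6 (ENXIO, "No such device or address"), reconnect polling fails, and bdev_nvme gives up the reset with "Resetting controller failed" for cnode1 through cnode10. To confirm that all ten subsystems ended up in that state, something like the following works (the log file name is again a placeholder):

  LOG=nvmf_shutdown_tc3.log                     # placeholder: wherever this console output was captured
  grep 'Resetting controller failed' "$LOG" | grep -o 'cnode[0-9]*' | sort -u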
00:27:59.349 16:04:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3434319 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3434319 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3434319 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:00.285 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:00.286 16:04:45 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:00.286 rmmod nvme_rdma 00:28:00.286 rmmod nvme_fabrics 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3433989 ']' 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3433989 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3433989 ']' 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3433989 00:28:00.286 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3433989) - No such process 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3433989 is not found' 00:28:00.286 Process with pid 3433989 is not found 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:00.286 00:28:00.286 real 0m9.424s 00:28:00.286 user 0m33.874s 00:28:00.286 sys 0m1.888s 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:00.286 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.286 ************************************ 00:28:00.286 END TEST nvmf_shutdown_tc3 00:28:00.286 ************************************ 00:28:00.544 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]] 00:28:00.544 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:00.544 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:00.544 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:00.545 ************************************ 00:28:00.545 START TEST nvmf_shutdown_tc4 00:28:00.545 ************************************ 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:00.545 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:00.545 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:00.545 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:00.545 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:00.545 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # rdma_device_init 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:00.546 16:04:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:00.546 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:00.546 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:00.546 altname enp217s0f0np0 00:28:00.546 altname ens818f0np0 00:28:00.546 inet 192.168.100.8/24 scope global mlx_0_0 00:28:00.546 valid_lft forever preferred_lft forever 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:00.546 16:04:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:00.546 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:00.546 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:00.546 altname enp217s0f1np1 00:28:00.546 altname ens818f1np1 00:28:00.546 inet 192.168.100.9/24 scope global mlx_0_1 00:28:00.546 valid_lft forever preferred_lft forever 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:00.546 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:00.804 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:00.805 192.168.100.9' 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:00.805 192.168.100.9' 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # head -n 1 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:00.805 192.168.100.9' 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # tail -n +2 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # head -n 1 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:00.805 16:04:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3435672 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3435672 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3435672 ']' 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:00.805 16:04:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:00.805 [2024-11-17 16:04:46.287209] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:00.805 [2024-11-17 16:04:46.287330] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.063 [2024-11-17 16:04:46.423671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:01.063 [2024-11-17 16:04:46.524746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:01.063 [2024-11-17 16:04:46.524794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:01.063 [2024-11-17 16:04:46.524806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:01.063 [2024-11-17 16:04:46.524819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:01.063 [2024-11-17 16:04:46.524829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
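The trace above is the tc4 setup from nvmf/common.sh: it loads the InfiniBand/RDMA kernel modules, identifies the two ConnectX ports (mlx_0_0 at 192.168.100.8 and mlx_0_1 at 192.168.100.9), and nvmfappstart then launches nvmf_tgt on cores 1-4 (-m 0x1E) with all tracepoint groups enabled (-e 0xFFFF) and waits for its RPC socket. A rough manual equivalent, a sketch only, with the paths, interface names and flags taken from this trace rather than from the helper functions themselves:

  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
      modprobe "$m"
  done
  ip -o -4 addr show mlx_0_0      # expect 192.168.100.8/24 on the first port
  ip -o -4 addr show mlx_0_1      # expect 192.168.100.9/24 on the second port
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!                      # the harness then waits for /var/tmp/spdk.sock to accept RPCs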
00:28:01.063 [2024-11-17 16:04:46.527384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:01.063 [2024-11-17 16:04:46.527453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:01.063 [2024-11-17 16:04:46.527535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.063 [2024-11-17 16:04:46.527582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:01.627 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:01.627 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:01.627 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:01.627 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:01.627 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:01.627 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:01.627 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:01.627 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.627 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:01.627 [2024-11-17 16:04:47.161166] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000292c0/0x7f8de7ba4940) succeed. 00:28:01.884 [2024-11-17 16:04:47.170586] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029440/0x7f8de7b60940) succeed. 
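With the target up, the test creates the RDMA transport; rpc_cmd in the harness is essentially a wrapper around scripts/rpc.py, so the call above corresponds to the plain RPC below, and the two "Create IB device" notices show the target binding one mlx5 device per ConnectX port:

  # same transport creation issued directly against the running target
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192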
00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.141 16:04:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:02.141 Malloc1 00:28:02.141 [2024-11-17 16:04:47.587783] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:02.141 Malloc2 00:28:02.400 Malloc3 00:28:02.400 Malloc4 00:28:02.657 Malloc5 00:28:02.657 Malloc6 00:28:02.657 Malloc7 00:28:02.915 Malloc8 00:28:02.915 Malloc9 00:28:02.915 Malloc10 00:28:02.915 16:04:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.915 16:04:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:02.915 16:04:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:02.915 16:04:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:03.173 16:04:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3436074 00:28:03.173 16:04:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:28:03.173 16:04:48 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:03.173 [2024-11-17 16:04:48.599819] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
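The ten "# cat" iterations above append one RPC block per subsystem to rpcs.txt, which target/shutdown.sh@36 then replays through rpc_cmd; the Malloc1..Malloc10 bdevs and the RDMA listener on 192.168.100.8:4420 in the output come from those blocks. A hedged sketch of what one generated block (i=1) might look like follows; the Malloc size, block size, and serial number are illustrative assumptions, and only the NQN pattern, the bdev names, and the listener address are visible in this log:

# Sketch of one generated rpcs.txt block (i=1); sizes and serial number are assumed values.
bdev_malloc_create -b Malloc1 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420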
00:28:08.435 16:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:08.435 16:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3435672 00:28:08.435 16:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3435672 ']' 00:28:08.435 16:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3435672 00:28:08.435 16:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:28:08.435 16:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:08.435 16:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3435672 00:28:08.435 16:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:08.435 16:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:08.435 16:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3435672' 00:28:08.435 killing process with pid 3435672 00:28:08.435 16:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3435672 00:28:08.435 16:04:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3435672 00:28:08.435 NVMe io qpair process completion error 00:28:08.435 NVMe io qpair process completion error 00:28:08.435 NVMe io qpair process completion error 00:28:08.435 NVMe io qpair process completion error 00:28:08.435 NVMe io qpair process completion error 00:28:08.435 starting I/O failed: -6 00:28:08.435 NVMe io qpair process completion error 00:28:08.435 NVMe io qpair process completion error 00:28:08.435 NVMe io qpair process completion error 00:28:08.435 NVMe io qpair process completion error 00:28:08.435 NVMe io qpair process completion error 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 
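The killprocess trace above (the -z check, kill -0, uname, ps comm= lookup, kill, wait) tears down the nvmf target while spdk_nvme_perf is still driving I/O, which is what produces the qpair completion errors here and the keep-alive failures that follow. A simplified reconstruction of that helper, based only on the xtrace lines visible in this log, is sketched below; the real function in autotest_common.sh also has sudo-wrapped and non-Linux branches that this run does not exercise:

# Simplified sketch of the traced killprocess helper; not the verbatim autotest_common.sh code.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                            # the '[ -z 3435672 ]' check in the trace
    kill -0 "$pid" || return 0                           # nothing to do if the process is already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")  # reactor_1 in this run
    fi
    if [ "$process_name" = sudo ]; then
        :                                                # the real helper descends into sudo's children here
    else
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap the target so its exit status is collected
    fi
}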
00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:08.435 starting I/O failed: -6 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.370 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 
00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error 
(sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 [2024-11-17 16:04:54.710369] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Submitting Keep Alive failed 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error 
(sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.371 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed 
with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 starting I/O failed: -6 00:28:09.372 [2024-11-17 16:04:54.736060] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Submitting Keep Alive failed 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error 
(sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed 
with error (sct=0, sc=8) 00:28:09.372 [2024-11-17 16:04:54.761679] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Submitting Keep Alive failed 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.372 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed 
with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 
[2024-11-17 16:04:54.786727] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Submitting Keep Alive failed 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 
Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.373 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 [2024-11-17 16:04:54.809283] 
nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Submitting Keep Alive failed 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error 
(sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 starting I/O failed: -6 00:28:09.374 [2024-11-17 16:04:54.833057] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Submitting Keep Alive failed 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 
00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.374 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error 
(sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 starting I/O failed: -6 00:28:09.375 [2024-11-17 16:04:54.860471] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Submitting Keep Alive failed 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 00:28:09.375 Write completed with error (sct=0, sc=8) 
00:28:09.375 Write completed with error (sct=0, sc=8) [identical entries condensed]
00:28:09.376 Write completed with error (sct=0, sc=8) [identical entries condensed]
00:28:09.376 starting I/O failed: -6
00:28:09.376 Write completed with error (sct=0, sc=8)
00:28:09.376 starting I/O failed: -6
00:28:09.376 [2024-11-17 16:04:54.887354] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Submitting Keep Alive failed
00:28:09.376 Write completed with error (sct=0, sc=8) / starting I/O failed: -6 [alternating entries condensed]
00:28:09.376 Write completed with error (sct=0, sc=8) [identical entries condensed]
00:28:09.635 Write completed with error (sct=0, sc=8)
00:28:09.635 starting I/O failed: -6
00:28:09.635 Write completed with error (sct=0, sc=8) [identical entries condensed]
00:28:09.635 [2024-11-17 16:04:54.915118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:09.635 Write completed with error (sct=0, sc=8)
00:28:09.635 [2024-11-17 16:04:54.915204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:09.635 Write completed with error (sct=0, sc=8) [identical entries condensed]
00:28:09.636 Write completed with error (sct=0, sc=8) [identical entries condensed]
00:28:09.636 [2024-11-17 16:04:54.941014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:09.636 [2024-11-17 16:04:54.941093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:28:09.636 Initializing NVMe Controllers
00:28:09.636 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:28:09.636 Controller IO queue size 128, less than required.
00:28:09.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:09.636 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:28:09.636 Controller IO queue size 128, less than required.
00:28:09.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:09.636 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:28:09.636 Controller IO queue size 128, less than required.
00:28:09.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:09.636 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:28:09.636 Controller IO queue size 128, less than required.
00:28:09.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:09.637 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:28:09.637 Controller IO queue size 128, less than required.
00:28:09.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:09.637 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
00:28:09.637 Controller IO queue size 128, less than required.
00:28:09.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:09.637 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:28:09.637 Controller IO queue size 128, less than required.
00:28:09.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:09.637 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:28:09.637 Controller IO queue size 128, less than required.
00:28:09.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:09.637 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:28:09.637 Controller IO queue size 128, less than required.
00:28:09.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:09.637 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:28:09.637 Controller IO queue size 128, less than required.
00:28:09.637 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:09.637 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:09.637 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:09.637 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:09.637 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:09.637 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:09.637 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:09.637 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:09.637 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:09.637 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:09.637 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:09.637 Initialization complete. Launching workers.
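The repeated "Controller IO queue size 128, less than required" notices above mean the perf tool asked for a deeper submission queue than each fabrics controller advertises, so the excess requests sit queued in the driver, exactly as the message suggests. As a purely illustrative sketch (the flag values below are my own and not the ones target/shutdown.sh actually passes), the queue depth handed to spdk_nvme_perf can be lowered with -q:

    # Illustrative invocation only; -q is the per-namespace queue depth,
    # -o the I/O size in bytes, -w the workload, -t the run time in seconds,
    # and -r the transport ID of one of the subsystems attached above.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'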
00:28:09.637 ========================================================
00:28:09.637 Latency(us)
00:28:09.637 Device Information : IOPS MiB/s Average min max
00:28:09.637 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1385.26 59.52 92447.49 196.29 1239627.44
00:28:09.637 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1390.34 59.74 92345.62 162.00 1228156.71
00:28:09.637 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1400.86 60.19 91890.97 144.51 1219951.84
00:28:09.637 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1408.66 60.53 91639.83 151.53 1260427.92
00:28:09.637 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1414.76 60.79 91489.18 130.26 1281744.63
00:28:09.637 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1436.63 61.73 85152.52 123.66 1165557.98
00:28:09.637 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1384.41 59.49 93966.47 146.86 1347127.85
00:28:09.637 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1417.47 60.91 92056.19 135.39 1291050.97
00:28:09.637 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1358.64 58.38 96322.05 25982.49 1435591.78
00:28:09.637 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1364.06 58.61 96174.44 1955.44 1460730.52
00:28:09.637 ========================================================
00:28:09.637 Total : 13961.10 599.89 92304.32 123.66 1460730.52
00:28:09.637
00:28:09.637 [2024-11-17 16:04:54.960722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:09.637 [2024-11-17 16:04:54.960757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:28:09.637 [2024-11-17 16:04:54.962967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:09.637 [2024-11-17 16:04:54.962989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:28:09.637 [2024-11-17 16:04:54.965144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:09.637 [2024-11-17 16:04:54.965166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:28:09.637 [2024-11-17 16:04:54.967099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:09.637 [2024-11-17 16:04:54.967121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:28:09.637 [2024-11-17 16:04:54.969024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:28:09.637 [2024-11-17 16:04:54.969047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
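As a quick consistency check on the summary row, the ten per-device IOPS figures above sum to roughly 13961.1, matching the printed Total of 13961.10, and the MiB/s column sums to ~599.9 the same way. A hedged one-liner for redoing the check from a saved copy of this output (the perf.log filename is assumed for illustration; the test does not produce it):

    # Illustrative check only: sum the IOPS and MiB/s columns of the
    # per-device rows; the last five fields of each row are
    # IOPS, MiB/s, Average, min, max.
    awk '/NSID 1 from core 0:/ { iops += $(NF-4); mib += $(NF-3) }
         END { printf "total IOPS=%.2f MiB/s=%.2f\n", iops, mib }' perf.log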
00:28:09.637 [2024-11-17 16:04:54.970962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:09.637 [2024-11-17 16:04:54.970989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:28:09.637 [2024-11-17 16:04:54.972928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:09.637 [2024-11-17 16:04:54.972955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:09.637 [2024-11-17 16:04:55.003847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:09.637 [2024-11-17 16:04:55.003874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:28:09.637 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:12.163 16:04:57 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3436074 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3436074 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3436074 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:12.730 16:04:58 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:12.730 rmmod nvme_rdma 00:28:12.730 rmmod nvme_fabrics 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3435672 ']' 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3435672 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3435672 ']' 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3435672 00:28:12.730 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3435672) - No such process 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3435672 is not found' 00:28:12.730 Process with pid 3435672 is not found 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:12.730 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:12.730 00:28:12.730 real 0m12.290s 00:28:12.730 user 0m46.147s 00:28:12.730 sys 0m1.595s 00:28:12.731 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:12.731 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:12.731 ************************************ 00:28:12.731 END TEST nvmf_shutdown_tc4 00:28:12.731 ************************************ 00:28:12.731 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:12.731 00:28:12.731 real 0m51.360s 00:28:12.731 user 2m52.021s 00:28:12.731 sys 0m12.170s 00:28:12.731 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:12.731 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:12.731 ************************************ 00:28:12.731 END 
TEST nvmf_shutdown 00:28:12.731 ************************************ 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:12.989 ************************************ 00:28:12.989 START TEST nvmf_nsid 00:28:12.989 ************************************ 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=rdma 00:28:12.989 * Looking for test storage... 00:28:12.989 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:12.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.989 --rc genhtml_branch_coverage=1 00:28:12.989 --rc genhtml_function_coverage=1 00:28:12.989 --rc genhtml_legend=1 00:28:12.989 --rc geninfo_all_blocks=1 00:28:12.989 --rc geninfo_unexecuted_blocks=1 00:28:12.989 00:28:12.989 ' 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:12.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.989 --rc genhtml_branch_coverage=1 00:28:12.989 --rc genhtml_function_coverage=1 00:28:12.989 --rc genhtml_legend=1 00:28:12.989 --rc geninfo_all_blocks=1 00:28:12.989 --rc geninfo_unexecuted_blocks=1 00:28:12.989 00:28:12.989 ' 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:12.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.989 --rc genhtml_branch_coverage=1 00:28:12.989 --rc genhtml_function_coverage=1 00:28:12.989 --rc genhtml_legend=1 00:28:12.989 --rc geninfo_all_blocks=1 00:28:12.989 --rc geninfo_unexecuted_blocks=1 00:28:12.989 00:28:12.989 ' 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:12.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.989 --rc genhtml_branch_coverage=1 00:28:12.989 --rc genhtml_function_coverage=1 00:28:12.989 --rc genhtml_legend=1 00:28:12.989 --rc geninfo_all_blocks=1 00:28:12.989 --rc geninfo_unexecuted_blocks=1 00:28:12.989 00:28:12.989 ' 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.989 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.247 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:13.248 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.248 16:04:58 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.799 16:05:05 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:19.799 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:19.799 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:19.800 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 
00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:19.800 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:19.800 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@448 -- # rdma_device_init 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # uname 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:19.800 16:05:05 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 
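The address lookup traced here is how the nsid test (and the rest of nvmf/common.sh) maps each RDMA interface to its IPv4 address: take the one-line-per-address output of "ip -o -4 addr show", keep the CIDR field, and strip the prefix length. A standalone sketch of the same pattern (the function name ip_of is mine; the in-tree helper is get_ip_address in test/nvmf/common.sh):

    # Standalone sketch of the lookup traced above (illustrative only).
    ip_of() {
        local ifc=$1
        # "ip -o -4 addr show" prints one line per address; field 4 is the
        # CIDR form (e.g. 192.168.100.8/24), so drop the /prefix part.
        ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
    }
    # ip_of mlx_0_0  ->  192.168.100.8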
00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:19.800 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:19.800 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:19.800 altname enp217s0f0np0 00:28:19.800 altname ens818f0np0 00:28:19.800 inet 192.168.100.8/24 scope global mlx_0_0 00:28:19.800 valid_lft forever preferred_lft forever 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:19.800 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:19.800 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:19.800 altname enp217s0f1np1 00:28:19.800 altname ens818f1np1 00:28:19.800 inet 192.168.100.9/24 scope global mlx_0_1 00:28:19.800 valid_lft forever preferred_lft forever 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:19.800 
16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@109 -- # continue 2 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:19.800 192.168.100.9' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:19.800 192.168.100.9' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # head -n 1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # head -n 1 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:19.800 192.168.100.9' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # tail -n +2 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:19.800 16:05:05 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:19.800 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3440894 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3440894 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3440894 ']' 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:20.059 16:05:05 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:20.059 [2024-11-17 16:05:05.442846] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:20.059 [2024-11-17 16:05:05.442943] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.059 [2024-11-17 16:05:05.574778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.317 [2024-11-17 16:05:05.669762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.317 [2024-11-17 16:05:05.669807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.317 [2024-11-17 16:05:05.669819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.317 [2024-11-17 16:05:05.669832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.317 [2024-11-17 16:05:05.669841] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
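At this point nvmfappstart has launched the first target (nvmf_tgt -i 0 -e 0xFFFF -m 1) and waitforlisten blocks until its UNIX-domain RPC socket answers. A minimal sketch of that start-and-wait step; the polling RPC used below (rpc_get_methods) is an assumption, since the log only shows the "Waiting for process..." message:

sock=/var/tmp/spdk.sock
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
nvmfpid=$!

# Poll until the target has created the socket and its RPC server responds.
for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done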
00:28:20.317 [2024-11-17 16:05:05.671095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3441135 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=192.168.100.8 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=192.168.100.8 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=96f1c2f5-ef57-4538-a779-bd0376e40e84 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=5f9413ae-e75f-41af-b7b2-fa1f2aa78f81 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=a588e9ca-3155-4eae-9aef-8f1d706bf864 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:28:20.882 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.882 16:05:06 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:20.882 null0 00:28:20.882 null1 00:28:20.882 null2 00:28:20.882 [2024-11-17 16:05:06.359739] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:20.882 [2024-11-17 16:05:06.359825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441135 ] 00:28:20.882 [2024-11-17 16:05:06.368417] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029a40/0x7f1690d76940) succeed. 00:28:20.882 [2024-11-17 16:05:06.377493] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029bc0/0x7f1690d32940) succeed. 00:28:21.138 [2024-11-17 16:05:06.462247] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:21.138 [2024-11-17 16:05:06.489385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.138 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.138 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3441135 /var/tmp/tgt2.sock 00:28:21.138 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3441135 ']' 00:28:21.138 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:28:21.138 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.138 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:28:21.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:28:21.138 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.138 16:05:06 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:21.138 [2024-11-17 16:05:06.590064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.072 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.072 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:22.072 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:28:22.333 [2024-11-17 16:05:07.686830] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029740/0x7fe2c53bd940) succeed. 00:28:22.333 [2024-11-17 16:05:07.697735] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000298c0/0x7fe2c5379940) succeed. 
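Next the test connects to the second target over RDMA port 4421 and verifies that each namespace's NGUID equals the UUID generated above with the dashes stripped. A sketch of that check; the helper bodies are inferred from the traced tr, nvme and jq commands rather than copied from nsid.sh:

uuid2nguid() {
    # 96f1c2f5-ef57-4538-a779-bd0376e40e84 -> 96F1C2F5EF574538A779BD0376E40E84
    local uuid=${1^^}
    echo "${uuid//-/}"
}

nvme_get_nguid() {
    local ctrlr=$1 nsid=$2
    nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid
}

ns1uuid=96f1c2f5-ef57-4538-a779-bd0376e40e84
nguid=$(nvme_get_nguid nvme0 1)
[[ ${nguid^^} == "$(uuid2nguid "$ns1uuid")" ]] || echo "NGUID mismatch for ns1" >&2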
00:28:22.333 [2024-11-17 16:05:07.773872] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:22.333 nvme0n1 nvme0n2 00:28:22.333 nvme1n1 00:28:22.333 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:28:22.333 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:28:22.333 16:05:07 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t rdma -a 192.168.100.8 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 96f1c2f5-ef57-4538-a779-bd0376e40e84 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=96f1c2f5ef574538a779bd0376e40e84 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 96F1C2F5EF574538A779BD0376E40E84 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 96F1C2F5EF574538A779BD0376E40E84 == \9\6\F\1\C\2\F\5\E\F\5\7\4\5\3\8\A\7\7\9\B\D\0\3\7\6\E\4\0\E\8\4 ]] 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:30.488 16:05:14 
nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 5f9413ae-e75f-41af-b7b2-fa1f2aa78f81 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5f9413aee75f41afb7b2fa1f2aa78f81 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5F9413AEE75F41AFB7B2FA1F2AA78F81 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 5F9413AEE75F41AFB7B2FA1F2AA78F81 == \5\F\9\4\1\3\A\E\E\7\5\F\4\1\A\F\B\7\B\2\F\A\1\F\2\A\A\7\8\F\8\1 ]] 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid a588e9ca-3155-4eae-9aef-8f1d706bf864 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a588e9ca31554eae9aef8f1d706bf864 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A588E9CA31554EAE9AEF8F1D706BF864 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ A588E9CA31554EAE9AEF8F1D706BF864 == 
\A\5\8\8\E\9\C\A\3\1\5\5\4\E\A\E\9\A\E\F\8\F\1\D\7\0\6\B\F\8\6\4 ]] 00:28:30.488 16:05:14 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:28:37.034 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:37.034 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:28:37.034 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3441135 00:28:37.034 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3441135 ']' 00:28:37.035 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3441135 00:28:37.035 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:37.035 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.035 16:05:21 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3441135 00:28:37.035 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:37.035 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:37.035 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3441135' 00:28:37.035 killing process with pid 3441135 00:28:37.035 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3441135 00:28:37.035 16:05:22 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3441135 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:38.928 rmmod nvme_rdma 00:28:38.928 rmmod nvme_fabrics 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3440894 ']' 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3440894 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3440894 ']' 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3440894 00:28:38.928 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:39.185 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:39.185 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3440894 00:28:39.185 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:39.185 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:39.185 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3440894' 00:28:39.185 killing process with pid 3440894 00:28:39.185 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3440894 00:28:39.185 16:05:24 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3440894 00:28:40.555 16:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:40.555 16:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:40.555 00:28:40.555 real 0m27.385s 00:28:40.555 user 0m39.979s 00:28:40.556 sys 0m6.859s 00:28:40.556 16:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.556 16:05:25 nvmf_rdma.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:40.556 ************************************ 00:28:40.556 END TEST nvmf_nsid 00:28:40.556 ************************************ 00:28:40.556 16:05:25 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:40.556 00:28:40.556 real 16m53.901s 00:28:40.556 user 51m23.797s 00:28:40.556 sys 3m22.397s 00:28:40.556 16:05:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.556 16:05:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:40.556 ************************************ 00:28:40.556 END TEST nvmf_target_extra 00:28:40.556 ************************************ 00:28:40.556 16:05:25 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:28:40.556 16:05:25 nvmf_rdma -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:40.556 16:05:25 nvmf_rdma -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:40.556 16:05:25 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:28:40.556 ************************************ 00:28:40.556 START TEST nvmf_host 00:28:40.556 ************************************ 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:28:40.556 * Looking for test storage... 
00:28:40.556 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:40.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.556 --rc genhtml_branch_coverage=1 00:28:40.556 --rc genhtml_function_coverage=1 00:28:40.556 --rc genhtml_legend=1 00:28:40.556 --rc geninfo_all_blocks=1 00:28:40.556 --rc geninfo_unexecuted_blocks=1 00:28:40.556 00:28:40.556 ' 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:28:40.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.556 --rc genhtml_branch_coverage=1 00:28:40.556 --rc genhtml_function_coverage=1 00:28:40.556 --rc genhtml_legend=1 00:28:40.556 --rc geninfo_all_blocks=1 00:28:40.556 --rc geninfo_unexecuted_blocks=1 00:28:40.556 00:28:40.556 ' 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:40.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.556 --rc genhtml_branch_coverage=1 00:28:40.556 --rc genhtml_function_coverage=1 00:28:40.556 --rc genhtml_legend=1 00:28:40.556 --rc geninfo_all_blocks=1 00:28:40.556 --rc geninfo_unexecuted_blocks=1 00:28:40.556 00:28:40.556 ' 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:40.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.556 --rc genhtml_branch_coverage=1 00:28:40.556 --rc genhtml_function_coverage=1 00:28:40.556 --rc genhtml_legend=1 00:28:40.556 --rc geninfo_all_blocks=1 00:28:40.556 --rc geninfo_unexecuted_blocks=1 00:28:40.556 00:28:40.556 ' 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.556 16:05:25 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:40.556 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:40.556 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:40.557 16:05:26 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:40.557 16:05:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:40.557 16:05:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:40.557 16:05:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:40.557 16:05:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:28:40.557 16:05:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:40.557 16:05:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:40.557 16:05:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.557 ************************************ 00:28:40.557 START TEST nvmf_multicontroller 00:28:40.557 ************************************ 00:28:40.557 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:28:40.814 * Looking for test storage... 00:28:40.814 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:40.814 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:40.814 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:40.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.815 --rc genhtml_branch_coverage=1 00:28:40.815 --rc genhtml_function_coverage=1 00:28:40.815 --rc genhtml_legend=1 00:28:40.815 --rc geninfo_all_blocks=1 00:28:40.815 --rc geninfo_unexecuted_blocks=1 00:28:40.815 00:28:40.815 ' 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:40.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.815 --rc genhtml_branch_coverage=1 00:28:40.815 --rc genhtml_function_coverage=1 00:28:40.815 --rc genhtml_legend=1 00:28:40.815 --rc geninfo_all_blocks=1 00:28:40.815 --rc geninfo_unexecuted_blocks=1 00:28:40.815 00:28:40.815 ' 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:40.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.815 --rc genhtml_branch_coverage=1 00:28:40.815 --rc genhtml_function_coverage=1 00:28:40.815 --rc genhtml_legend=1 00:28:40.815 --rc geninfo_all_blocks=1 00:28:40.815 --rc geninfo_unexecuted_blocks=1 00:28:40.815 00:28:40.815 ' 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:40.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:40.815 --rc genhtml_branch_coverage=1 00:28:40.815 --rc genhtml_function_coverage=1 00:28:40.815 --rc genhtml_legend=1 00:28:40.815 --rc geninfo_all_blocks=1 00:28:40.815 --rc geninfo_unexecuted_blocks=1 00:28:40.815 00:28:40.815 ' 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
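The lcov gate traced above ("lt 1.15 2" via scripts/common.sh cmp_versions) splits both version strings on dots and dashes and compares them component by component. A simplified stand-in for that lt/cmp_versions pair, assuming numeric-only components:

version_lt() {
    # Returns 0 if $1 sorts strictly before $2, comparing numeric components.
    local -a a b
    IFS=.- read -ra a <<< "$1"
    IFS=.- read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2, keep the legacy LCOV_OPTS"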
00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:40.815 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:40.815 16:05:26 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:40.815 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:40.816 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:40.816 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:28:40.816 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:40.816 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:40.816 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:28:40.816 00:28:40.816 real 0m0.219s 00:28:40.816 user 0m0.133s 00:28:40.816 sys 0m0.103s 00:28:40.816 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:40.816 16:05:26 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:40.816 ************************************ 00:28:40.816 END TEST nvmf_multicontroller 00:28:40.816 ************************************ 00:28:40.816 16:05:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:28:40.816 16:05:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:40.816 16:05:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:40.816 16:05:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.816 ************************************ 00:28:40.816 START TEST nvmf_aer 00:28:40.816 ************************************ 00:28:40.816 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:28:41.074 * Looking for test storage... 
00:28:41.074 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:41.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.074 --rc genhtml_branch_coverage=1 00:28:41.074 --rc genhtml_function_coverage=1 00:28:41.074 --rc genhtml_legend=1 00:28:41.074 --rc geninfo_all_blocks=1 00:28:41.074 --rc geninfo_unexecuted_blocks=1 00:28:41.074 00:28:41.074 ' 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:41.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.074 --rc genhtml_branch_coverage=1 00:28:41.074 --rc genhtml_function_coverage=1 00:28:41.074 --rc genhtml_legend=1 00:28:41.074 --rc geninfo_all_blocks=1 00:28:41.074 --rc geninfo_unexecuted_blocks=1 00:28:41.074 00:28:41.074 ' 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:41.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.074 --rc genhtml_branch_coverage=1 00:28:41.074 --rc genhtml_function_coverage=1 00:28:41.074 --rc genhtml_legend=1 00:28:41.074 --rc geninfo_all_blocks=1 00:28:41.074 --rc geninfo_unexecuted_blocks=1 00:28:41.074 00:28:41.074 ' 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:41.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:41.074 --rc genhtml_branch_coverage=1 00:28:41.074 --rc genhtml_function_coverage=1 00:28:41.074 --rc genhtml_legend=1 00:28:41.074 --rc geninfo_all_blocks=1 00:28:41.074 --rc geninfo_unexecuted_blocks=1 00:28:41.074 00:28:41.074 ' 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:41.074 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:41.075 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:28:41.075 16:05:26 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:28:47.625 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:47.626 16:05:32 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:47.626 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:47.626 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:47.626 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:47.626 
16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:47.626 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # rdma_device_init 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:47.626 16:05:32 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:47.626 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:47.626 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:47.626 altname enp217s0f0np0 00:28:47.626 altname ens818f0np0 00:28:47.626 inet 192.168.100.8/24 scope global mlx_0_0 00:28:47.626 valid_lft forever preferred_lft forever 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:47.626 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:47.626 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:47.626 altname enp217s0f1np1 00:28:47.626 altname ens818f1np1 00:28:47.626 inet 192.168.100.9/24 scope global mlx_0_1 00:28:47.626 valid_lft forever preferred_lft forever 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:28:47.626 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer 
-- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:47.627 192.168.100.9' 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:47.627 192.168.100.9' 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # head -n 1 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:47.627 192.168.100.9' 
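The trace around this point walks the two RDMA-capable netdevs this rig exposes (mlx_0_0 and mlx_0_1) and pulls their IPv4 addresses out of ip(8) output; the first address becomes NVMF_FIRST_TARGET_IP just above and the second is split off right below with tail/head. A minimal sketch of that extraction, using the same pipeline the trace shows (interface names hard-coded here only because they are the ones this machine reports):

  # Sketch: first IPv4 address of a given RDMA netdev, then split the list
  # into first/second target IPs the way nvmf/common.sh does in the trace.
  get_ip_sketch() {
      ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST="$(get_ip_sketch mlx_0_0)
  $(get_ip_sketch mlx_0_1)"
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9 in this run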
00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # tail -n +2 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # head -n 1 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3447776 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3447776 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3447776 ']' 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:47.627 16:05:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:47.627 [2024-11-17 16:05:32.975822] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:47.627 [2024-11-17 16:05:32.975913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.627 [2024-11-17 16:05:33.109730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:47.885 [2024-11-17 16:05:33.211074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.885 [2024-11-17 16:05:33.211120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.885 [2024-11-17 16:05:33.211133] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.885 [2024-11-17 16:05:33.211145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:47.885 [2024-11-17 16:05:33.211156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:47.885 [2024-11-17 16:05:33.213795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.885 [2024-11-17 16:05:33.213815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.885 [2024-11-17 16:05:33.213903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:47.885 [2024-11-17 16:05:33.213910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.450 16:05:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.450 16:05:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:28:48.450 16:05:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.450 16:05:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:48.450 16:05:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.450 16:05:33 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.450 16:05:33 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:48.450 16:05:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.450 16:05:33 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.450 [2024-11-17 16:05:33.854269] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f5c257bd940) succeed. 00:28:48.450 [2024-11-17 16:05:33.863752] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f5c25779940) succeed. 
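With the RDMA transport created and both mlx5 IB devices registered, host/aer.sh builds its test subsystem over the next few traced RPCs. rpc_cmd in the trace is essentially scripts/rpc.py aimed at the running nvmf_tgt's RPC socket, so the sequence the log shows next amounts to roughly:

  # Sketch of the subsystem bring-up the following trace performs (same RPCs,
  # written out as plain rpc.py calls; assumes nvmf_tgt is already running).
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC bdev_malloc_create 64 512 --name Malloc0        # 64 MB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The -m 2 cap on max namespaces matters later: the AER test relies on adding a second namespace to this subsystem to trigger the namespace-attribute-changed event.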
00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.708 Malloc0 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.708 [2024-11-17 16:05:34.239257] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.708 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.966 [ 00:28:48.966 { 00:28:48.966 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:48.966 "subtype": "Discovery", 00:28:48.966 "listen_addresses": [], 00:28:48.966 "allow_any_host": true, 00:28:48.966 "hosts": [] 00:28:48.966 }, 00:28:48.966 { 00:28:48.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:48.966 "subtype": "NVMe", 00:28:48.966 "listen_addresses": [ 00:28:48.966 { 00:28:48.966 "trtype": "RDMA", 00:28:48.966 "adrfam": "IPv4", 00:28:48.966 "traddr": "192.168.100.8", 00:28:48.966 "trsvcid": "4420" 00:28:48.966 } 00:28:48.966 ], 00:28:48.966 "allow_any_host": true, 00:28:48.966 "hosts": [], 00:28:48.966 "serial_number": "SPDK00000000000001", 00:28:48.966 "model_number": "SPDK bdev Controller", 00:28:48.966 "max_namespaces": 2, 00:28:48.966 "min_cntlid": 1, 00:28:48.966 "max_cntlid": 65519, 00:28:48.966 "namespaces": [ 00:28:48.966 { 00:28:48.966 "nsid": 1, 00:28:48.966 "bdev_name": "Malloc0", 00:28:48.966 "name": "Malloc0", 00:28:48.966 "nguid": "E77E1708D38E44E596B5EC29B32BCBB6", 00:28:48.966 "uuid": "e77e1708-d38e-44e5-96b5-ec29b32bcbb6" 00:28:48.966 } 00:28:48.966 ] 00:28:48.966 } 00:28:48.966 ] 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3447970 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:28:48.966 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:49.224 Malloc1 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:49.224 [ 00:28:49.224 { 00:28:49.224 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:49.224 "subtype": "Discovery", 00:28:49.224 "listen_addresses": [], 00:28:49.224 "allow_any_host": true, 00:28:49.224 "hosts": [] 00:28:49.224 }, 00:28:49.224 { 00:28:49.224 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:49.224 "subtype": "NVMe", 00:28:49.224 "listen_addresses": [ 00:28:49.224 { 00:28:49.224 "trtype": "RDMA", 00:28:49.224 "adrfam": "IPv4", 00:28:49.224 "traddr": "192.168.100.8", 00:28:49.224 "trsvcid": "4420" 00:28:49.224 } 00:28:49.224 ], 00:28:49.224 "allow_any_host": true, 00:28:49.224 "hosts": [], 00:28:49.224 "serial_number": "SPDK00000000000001", 00:28:49.224 "model_number": "SPDK bdev Controller", 00:28:49.224 "max_namespaces": 2, 00:28:49.224 "min_cntlid": 1, 00:28:49.224 "max_cntlid": 65519, 00:28:49.224 "namespaces": [ 00:28:49.224 { 00:28:49.224 "nsid": 1, 00:28:49.224 "bdev_name": "Malloc0", 00:28:49.224 "name": "Malloc0", 00:28:49.224 "nguid": "E77E1708D38E44E596B5EC29B32BCBB6", 00:28:49.224 "uuid": "e77e1708-d38e-44e5-96b5-ec29b32bcbb6" 00:28:49.224 }, 00:28:49.224 { 00:28:49.224 "nsid": 2, 00:28:49.224 "bdev_name": "Malloc1", 00:28:49.224 "name": "Malloc1", 00:28:49.224 "nguid": "BA90FDC82F124F1FBF99E85BF72D6950", 00:28:49.224 "uuid": "ba90fdc8-2f12-4f1f-bf99-e85bf72d6950" 00:28:49.224 } 00:28:49.224 ] 00:28:49.224 } 00:28:49.224 ] 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.224 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3447970 00:28:49.481 Asynchronous Event Request test 00:28:49.481 Attaching to 192.168.100.8 00:28:49.481 Attached to 192.168.100.8 00:28:49.481 Registering asynchronous event callbacks... 00:28:49.481 Starting namespace attribute notice tests for all controllers... 00:28:49.481 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:49.481 aer_cb - Changed Namespace 00:28:49.481 Cleaning up... 
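The "Asynchronous Event Request test" block above is driven by the aer helper binary started with -t /tmp/aer_touch_file: once it has armed its AER it creates that file, and the script only proceeds after the waitforfile loop traced above sees it. That polling loop, reduced to a sketch of what the xtrace shows:

  # Sketch of the waitforfile polling seen in the trace: wait up to ~20 s
  # (200 iterations of 0.1 s) for the touch file to appear.
  waitforfile_sketch() {
      local i=0
      while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do
          i=$((i + 1))
          sleep 0.1
      done
      [ -e "$1" ]   # non-zero exit if the file never appeared
  }
  waitforfile_sketch /tmp/aer_touch_file

Once the file exists, adding Malloc1 as namespace 2 of cnode1 is what produces the "aer_cb - Changed Namespace" callback recorded just above, after which the helper cleans up and exits.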
00:28:49.481 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:49.482 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.482 16:05:34 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:28:49.739 rmmod nvme_rdma 00:28:49.739 rmmod nvme_fabrics 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3447776 ']' 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3447776 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3447776 ']' 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3447776 00:28:49.739 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:28:49.996 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:49.996 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3447776 00:28:49.997 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:49.997 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:49.997 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3447776' 00:28:49.997 killing process 
with pid 3447776 00:28:49.997 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3447776 00:28:49.997 16:05:35 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3447776 00:28:51.895 16:05:36 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:51.895 16:05:36 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:28:51.895 00:28:51.895 real 0m10.624s 00:28:51.895 user 0m15.062s 00:28:51.895 sys 0m5.666s 00:28:51.895 16:05:36 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.895 16:05:36 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:51.895 ************************************ 00:28:51.895 END TEST nvmf_aer 00:28:51.895 ************************************ 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.895 ************************************ 00:28:51.895 START TEST nvmf_async_init 00:28:51.895 ************************************ 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:28:51.895 * Looking for test storage... 00:28:51.895 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 
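The scripts/common.sh calls that follow are the same lcov version probe every suite runs: lt 1.15 2 splits each version string on dots and dashes and compares the pieces numerically to decide which LCOV_OPTS to export. Reduced to its core (a sketch only; the real cmp_versions also handles '>' and non-numeric components):

  # Sketch: succeed if version $1 is strictly less than version $2,
  # comparing dot/dash-separated numeric components left to right.
  version_lt_sketch() {
      local IFS='.-:' i
      local -a v1 v2
      read -ra v1 <<< "$1"
      read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      done
      return 1   # versions are equal
  }
  version_lt_sketch 1.15 2 && echo "installed lcov is older than 2.x"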
00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:51.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.895 --rc genhtml_branch_coverage=1 00:28:51.895 --rc genhtml_function_coverage=1 00:28:51.895 --rc genhtml_legend=1 00:28:51.895 --rc geninfo_all_blocks=1 00:28:51.895 --rc geninfo_unexecuted_blocks=1 00:28:51.895 00:28:51.895 ' 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:51.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.895 --rc genhtml_branch_coverage=1 00:28:51.895 --rc genhtml_function_coverage=1 00:28:51.895 --rc genhtml_legend=1 00:28:51.895 --rc geninfo_all_blocks=1 00:28:51.895 --rc geninfo_unexecuted_blocks=1 00:28:51.895 00:28:51.895 ' 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:51.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.895 --rc genhtml_branch_coverage=1 00:28:51.895 --rc genhtml_function_coverage=1 00:28:51.895 --rc genhtml_legend=1 00:28:51.895 --rc geninfo_all_blocks=1 00:28:51.895 --rc geninfo_unexecuted_blocks=1 00:28:51.895 00:28:51.895 ' 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:51.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.895 --rc genhtml_branch_coverage=1 00:28:51.895 --rc genhtml_function_coverage=1 00:28:51.895 --rc genhtml_legend=1 00:28:51.895 --rc geninfo_all_blocks=1 00:28:51.895 --rc geninfo_unexecuted_blocks=1 00:28:51.895 00:28:51.895 ' 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.895 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:51.896 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
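host/async_init.sh opens by naming its fixtures: a null backing bdev called null0 (size 1024, 512-byte block size) and, in the next traced lines, the nvme0 controller-bdev name plus a namespace GUID derived by stripping the dashes from a uuidgen result. That derivation is simply:

  # Sketch of the nguid derivation the following trace lines perform: a random
  # UUID with the dashes removed yields the 32 hex digits an NVMe NGUID is rendered as.
  nguid=$(uuidgen | tr -d -)
  echo "$nguid"   # dfc2a24ba5cf482ea7b3f1de5b7042bf in this particular run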
00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dfc2a24ba5cf482ea7b3f1de5b7042bf 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.896 16:05:37 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:28:58.450 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:58.451 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:58.451 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:58.451 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:58.451 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # rdma_device_init 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@530 -- # allocate_nic_ips 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:28:58.451 16:05:43 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:28:58.451 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:58.451 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:58.451 altname enp217s0f0np0 00:28:58.451 altname ens818f0np0 00:28:58.451 inet 192.168.100.8/24 scope global mlx_0_0 00:28:58.451 valid_lft forever preferred_lft forever 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:28:58.451 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:58.451 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:58.451 altname enp217s0f1np1 00:28:58.451 altname ens818f1np1 00:28:58.451 inet 192.168.100.9/24 scope global mlx_0_1 00:28:58.451 valid_lft forever preferred_lft forever 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 
2 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:28:58.451 192.168.100.9' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:28:58.451 192.168.100.9' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # head -n 1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:28:58.451 192.168.100.9' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # tail -n +2 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # head -n 1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 
-- # modprobe nvme-rdma 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3451668 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3451668 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3451668 ']' 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.451 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.452 16:05:43 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:58.709 [2024-11-17 16:05:44.008902] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:58.709 [2024-11-17 16:05:44.008993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.710 [2024-11-17 16:05:44.144112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.710 [2024-11-17 16:05:44.238006] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.710 [2024-11-17 16:05:44.238055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.710 [2024-11-17 16:05:44.238068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.710 [2024-11-17 16:05:44.238080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.710 [2024-11-17 16:05:44.238090] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
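[editor's note] The block above discovers the two mlx5 ports (0000:d9:00.0/0000:d9:00.1), loads the RDMA kernel modules, reads the interface IPv4 addresses to build RDMA_IP_LIST, and then launches nvmf_tgt. A hedged sketch of the same bring-up done by hand; module names, the nvmf_tgt path, and its flags are taken from the log, while the interface name mlx_0_0 is just the example this host happened to expose:
    sudo modprobe ib_cm
    sudo modprobe ib_core
    sudo modprobe ib_umad
    sudo modprobe ib_uverbs
    sudo modprobe iw_cm
    sudo modprobe rdma_cm
    sudo modprobe rdma_ucm
    sudo modprobe nvme-rdma
    # First IPv4 address of an RDMA-capable netdev, mirroring get_ip_address():
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8 on this node
    # Start the SPDK NVMe-oF target on core 0 with all trace groups enabled:
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &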
00:28:58.710 [2024-11-17 16:05:44.239357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.275 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.275 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:28:59.275 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.275 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.275 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.534 [2024-11-17 16:05:44.878888] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028840/0x7f5d66d7d940) succeed. 00:28:59.534 [2024-11-17 16:05:44.887666] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000289c0/0x7f5d66d38940) succeed. 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.534 null0 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dfc2a24ba5cf482ea7b3f1de5b7042bf 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.534 16:05:44 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.534 [2024-11-17 16:05:44.999141] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:59.534 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.534 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:59.534 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.534 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.792 nvme0n1 00:28:59.792 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.792 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:59.792 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.792 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.792 [ 00:28:59.792 { 00:28:59.792 "name": "nvme0n1", 00:28:59.792 "aliases": [ 00:28:59.792 "dfc2a24b-a5cf-482e-a7b3-f1de5b7042bf" 00:28:59.792 ], 00:28:59.792 "product_name": "NVMe disk", 00:28:59.792 "block_size": 512, 00:28:59.792 "num_blocks": 2097152, 00:28:59.792 "uuid": "dfc2a24b-a5cf-482e-a7b3-f1de5b7042bf", 00:28:59.792 "numa_id": 1, 00:28:59.792 "assigned_rate_limits": { 00:28:59.792 "rw_ios_per_sec": 0, 00:28:59.792 "rw_mbytes_per_sec": 0, 00:28:59.792 "r_mbytes_per_sec": 0, 00:28:59.792 "w_mbytes_per_sec": 0 00:28:59.792 }, 00:28:59.792 "claimed": false, 00:28:59.792 "zoned": false, 00:28:59.792 "supported_io_types": { 00:28:59.793 "read": true, 00:28:59.793 "write": true, 00:28:59.793 "unmap": false, 00:28:59.793 "flush": true, 00:28:59.793 "reset": true, 00:28:59.793 "nvme_admin": true, 00:28:59.793 "nvme_io": true, 00:28:59.793 "nvme_io_md": false, 00:28:59.793 "write_zeroes": true, 00:28:59.793 "zcopy": false, 00:28:59.793 "get_zone_info": false, 00:28:59.793 "zone_management": false, 00:28:59.793 "zone_append": false, 00:28:59.793 "compare": true, 00:28:59.793 "compare_and_write": true, 00:28:59.793 "abort": true, 00:28:59.793 "seek_hole": false, 00:28:59.793 "seek_data": false, 00:28:59.793 "copy": true, 00:28:59.793 "nvme_iov_md": false 00:28:59.793 }, 00:28:59.793 "memory_domains": [ 00:28:59.793 { 00:28:59.793 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:28:59.793 "dma_device_type": 0 00:28:59.793 } 00:28:59.793 ], 00:28:59.793 "driver_specific": { 00:28:59.793 "nvme": [ 00:28:59.793 { 00:28:59.793 "trid": { 00:28:59.793 "trtype": "RDMA", 00:28:59.793 "adrfam": "IPv4", 00:28:59.793 "traddr": "192.168.100.8", 00:28:59.793 "trsvcid": "4420", 00:28:59.793 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:59.793 }, 00:28:59.793 "ctrlr_data": { 00:28:59.793 "cntlid": 1, 00:28:59.793 "vendor_id": "0x8086", 00:28:59.793 "model_number": "SPDK bdev Controller", 00:28:59.793 "serial_number": "00000000000000000000", 00:28:59.793 "firmware_revision": "25.01", 00:28:59.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.793 "oacs": { 00:28:59.793 "security": 0, 
00:28:59.793 "format": 0, 00:28:59.793 "firmware": 0, 00:28:59.793 "ns_manage": 0 00:28:59.793 }, 00:28:59.793 "multi_ctrlr": true, 00:28:59.793 "ana_reporting": false 00:28:59.793 }, 00:28:59.793 "vs": { 00:28:59.793 "nvme_version": "1.3" 00:28:59.793 }, 00:28:59.793 "ns_data": { 00:28:59.793 "id": 1, 00:28:59.793 "can_share": true 00:28:59.793 } 00:28:59.793 } 00:28:59.793 ], 00:28:59.793 "mp_policy": "active_passive" 00:28:59.793 } 00:28:59.793 } 00:28:59.793 ] 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.793 [2024-11-17 16:05:45.125049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:59.793 [2024-11-17 16:05:45.158199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:28:59.793 [2024-11-17 16:05:45.180437] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.793 [ 00:28:59.793 { 00:28:59.793 "name": "nvme0n1", 00:28:59.793 "aliases": [ 00:28:59.793 "dfc2a24b-a5cf-482e-a7b3-f1de5b7042bf" 00:28:59.793 ], 00:28:59.793 "product_name": "NVMe disk", 00:28:59.793 "block_size": 512, 00:28:59.793 "num_blocks": 2097152, 00:28:59.793 "uuid": "dfc2a24b-a5cf-482e-a7b3-f1de5b7042bf", 00:28:59.793 "numa_id": 1, 00:28:59.793 "assigned_rate_limits": { 00:28:59.793 "rw_ios_per_sec": 0, 00:28:59.793 "rw_mbytes_per_sec": 0, 00:28:59.793 "r_mbytes_per_sec": 0, 00:28:59.793 "w_mbytes_per_sec": 0 00:28:59.793 }, 00:28:59.793 "claimed": false, 00:28:59.793 "zoned": false, 00:28:59.793 "supported_io_types": { 00:28:59.793 "read": true, 00:28:59.793 "write": true, 00:28:59.793 "unmap": false, 00:28:59.793 "flush": true, 00:28:59.793 "reset": true, 00:28:59.793 "nvme_admin": true, 00:28:59.793 "nvme_io": true, 00:28:59.793 "nvme_io_md": false, 00:28:59.793 "write_zeroes": true, 00:28:59.793 "zcopy": false, 00:28:59.793 "get_zone_info": false, 00:28:59.793 "zone_management": false, 00:28:59.793 "zone_append": false, 00:28:59.793 "compare": true, 00:28:59.793 "compare_and_write": true, 00:28:59.793 "abort": true, 00:28:59.793 "seek_hole": false, 00:28:59.793 "seek_data": false, 00:28:59.793 "copy": true, 00:28:59.793 "nvme_iov_md": false 00:28:59.793 }, 00:28:59.793 "memory_domains": [ 00:28:59.793 { 00:28:59.793 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:28:59.793 "dma_device_type": 0 00:28:59.793 } 00:28:59.793 ], 00:28:59.793 "driver_specific": { 00:28:59.793 "nvme": [ 00:28:59.793 { 00:28:59.793 "trid": { 00:28:59.793 "trtype": "RDMA", 00:28:59.793 "adrfam": "IPv4", 00:28:59.793 "traddr": "192.168.100.8", 
00:28:59.793 "trsvcid": "4420", 00:28:59.793 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:59.793 }, 00:28:59.793 "ctrlr_data": { 00:28:59.793 "cntlid": 2, 00:28:59.793 "vendor_id": "0x8086", 00:28:59.793 "model_number": "SPDK bdev Controller", 00:28:59.793 "serial_number": "00000000000000000000", 00:28:59.793 "firmware_revision": "25.01", 00:28:59.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.793 "oacs": { 00:28:59.793 "security": 0, 00:28:59.793 "format": 0, 00:28:59.793 "firmware": 0, 00:28:59.793 "ns_manage": 0 00:28:59.793 }, 00:28:59.793 "multi_ctrlr": true, 00:28:59.793 "ana_reporting": false 00:28:59.793 }, 00:28:59.793 "vs": { 00:28:59.793 "nvme_version": "1.3" 00:28:59.793 }, 00:28:59.793 "ns_data": { 00:28:59.793 "id": 1, 00:28:59.793 "can_share": true 00:28:59.793 } 00:28:59.793 } 00:28:59.793 ], 00:28:59.793 "mp_policy": "active_passive" 00:28:59.793 } 00:28:59.793 } 00:28:59.793 ] 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.dk4TUDBTcm 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.dk4TUDBTcm 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.dk4TUDBTcm 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.793 [2024-11-17 16:05:45.287872] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.793 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.793 [2024-11-17 16:05:45.307906] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:00.052 nvme0n1 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.052 [ 00:29:00.052 { 00:29:00.052 "name": "nvme0n1", 00:29:00.052 "aliases": [ 00:29:00.052 "dfc2a24b-a5cf-482e-a7b3-f1de5b7042bf" 00:29:00.052 ], 00:29:00.052 "product_name": "NVMe disk", 00:29:00.052 "block_size": 512, 00:29:00.052 "num_blocks": 2097152, 00:29:00.052 "uuid": "dfc2a24b-a5cf-482e-a7b3-f1de5b7042bf", 00:29:00.052 "numa_id": 1, 00:29:00.052 "assigned_rate_limits": { 00:29:00.052 "rw_ios_per_sec": 0, 00:29:00.052 "rw_mbytes_per_sec": 0, 00:29:00.052 "r_mbytes_per_sec": 0, 00:29:00.052 "w_mbytes_per_sec": 0 00:29:00.052 }, 00:29:00.052 "claimed": false, 00:29:00.052 "zoned": false, 00:29:00.052 "supported_io_types": { 00:29:00.052 "read": true, 00:29:00.052 "write": true, 00:29:00.052 "unmap": false, 00:29:00.052 "flush": true, 00:29:00.052 "reset": true, 00:29:00.052 "nvme_admin": true, 00:29:00.052 "nvme_io": true, 00:29:00.052 "nvme_io_md": false, 00:29:00.052 "write_zeroes": true, 00:29:00.052 "zcopy": false, 00:29:00.052 "get_zone_info": false, 00:29:00.052 "zone_management": false, 00:29:00.052 "zone_append": false, 00:29:00.052 "compare": true, 00:29:00.052 "compare_and_write": true, 00:29:00.052 "abort": true, 00:29:00.052 "seek_hole": false, 00:29:00.052 "seek_data": false, 00:29:00.052 "copy": true, 00:29:00.052 "nvme_iov_md": false 00:29:00.052 }, 00:29:00.052 "memory_domains": [ 00:29:00.052 { 00:29:00.052 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:29:00.052 "dma_device_type": 0 00:29:00.052 } 00:29:00.052 ], 00:29:00.052 "driver_specific": { 00:29:00.052 "nvme": [ 00:29:00.052 { 00:29:00.052 "trid": { 00:29:00.052 "trtype": "RDMA", 00:29:00.052 "adrfam": "IPv4", 00:29:00.052 "traddr": "192.168.100.8", 00:29:00.052 "trsvcid": "4421", 00:29:00.052 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:00.052 }, 00:29:00.052 "ctrlr_data": { 00:29:00.052 "cntlid": 3, 00:29:00.052 "vendor_id": "0x8086", 00:29:00.052 "model_number": "SPDK bdev Controller", 00:29:00.052 
"serial_number": "00000000000000000000", 00:29:00.052 "firmware_revision": "25.01", 00:29:00.052 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.052 "oacs": { 00:29:00.052 "security": 0, 00:29:00.052 "format": 0, 00:29:00.052 "firmware": 0, 00:29:00.052 "ns_manage": 0 00:29:00.052 }, 00:29:00.052 "multi_ctrlr": true, 00:29:00.052 "ana_reporting": false 00:29:00.052 }, 00:29:00.052 "vs": { 00:29:00.052 "nvme_version": "1.3" 00:29:00.052 }, 00:29:00.052 "ns_data": { 00:29:00.052 "id": 1, 00:29:00.052 "can_share": true 00:29:00.052 } 00:29:00.052 } 00:29:00.052 ], 00:29:00.052 "mp_policy": "active_passive" 00:29:00.052 } 00:29:00.052 } 00:29:00.052 ] 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.dk4TUDBTcm 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:00.052 rmmod nvme_rdma 00:29:00.052 rmmod nvme_fabrics 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3451668 ']' 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3451668 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3451668 ']' 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3451668 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3451668 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:00.052 16:05:45 
nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3451668' 00:29:00.052 killing process with pid 3451668 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3451668 00:29:00.052 16:05:45 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3451668 00:29:01.431 16:05:46 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:01.431 16:05:46 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:01.431 00:29:01.431 real 0m9.489s 00:29:01.431 user 0m4.574s 00:29:01.431 sys 0m5.672s 00:29:01.431 16:05:46 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:01.431 16:05:46 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:01.431 ************************************ 00:29:01.431 END TEST nvmf_async_init 00:29:01.431 ************************************ 00:29:01.431 16:05:46 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:29:01.431 16:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:01.431 16:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:01.431 16:05:46 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.431 ************************************ 00:29:01.431 START TEST dma 00:29:01.431 ************************************ 00:29:01.431 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:29:01.431 * Looking for test storage... 
00:29:01.431 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:01.431 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:01.431 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:29:01.431 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:01.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.432 --rc genhtml_branch_coverage=1 00:29:01.432 --rc genhtml_function_coverage=1 00:29:01.432 --rc genhtml_legend=1 00:29:01.432 --rc geninfo_all_blocks=1 00:29:01.432 --rc geninfo_unexecuted_blocks=1 00:29:01.432 00:29:01.432 ' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:01.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.432 --rc genhtml_branch_coverage=1 00:29:01.432 --rc genhtml_function_coverage=1 00:29:01.432 --rc genhtml_legend=1 00:29:01.432 --rc geninfo_all_blocks=1 00:29:01.432 --rc geninfo_unexecuted_blocks=1 00:29:01.432 00:29:01.432 ' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:01.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.432 --rc genhtml_branch_coverage=1 00:29:01.432 --rc genhtml_function_coverage=1 00:29:01.432 --rc genhtml_legend=1 00:29:01.432 --rc geninfo_all_blocks=1 00:29:01.432 --rc geninfo_unexecuted_blocks=1 00:29:01.432 00:29:01.432 ' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:01.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:01.432 --rc genhtml_branch_coverage=1 00:29:01.432 --rc genhtml_function_coverage=1 00:29:01.432 --rc genhtml_legend=1 00:29:01.432 --rc geninfo_all_blocks=1 00:29:01.432 --rc geninfo_unexecuted_blocks=1 00:29:01.432 00:29:01.432 ' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:01.432 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:01.432 16:05:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:29:01.433 16:05:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:07.989 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:07.989 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:07.989 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.989 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:07.990 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # rdma_device_init 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:07.990 16:05:52 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:07.990 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:07.990 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:07.990 altname enp217s0f0np0 00:29:07.990 altname ens818f0np0 00:29:07.990 inet 192.168.100.8/24 scope global mlx_0_0 00:29:07.990 valid_lft forever preferred_lft forever 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:07.990 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:07.990 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:07.990 altname enp217s0f1np1 00:29:07.990 altname ens818f1np1 00:29:07.990 inet 192.168.100.9/24 scope global mlx_0_1 00:29:07.990 valid_lft forever preferred_lft forever 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # return 0 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:07.990 192.168.100.9' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:07.990 192.168.100.9' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # head -n 1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:07.990 192.168.100.9' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # tail -n +2 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # head -n 1 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # nvmfpid=3455378 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # waitforlisten 3455378 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # '[' -z 3455378 ']' 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.990 16:05:53 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.991 16:05:53 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.991 16:05:53 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:07.991 16:05:53 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:07.991 [2024-11-17 16:05:53.263386] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:07.991 [2024-11-17 16:05:53.263479] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.991 [2024-11-17 16:05:53.396469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:07.991 [2024-11-17 16:05:53.490678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.991 [2024-11-17 16:05:53.490726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.991 [2024-11-17 16:05:53.490738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.991 [2024-11-17 16:05:53.490751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.991 [2024-11-17 16:05:53.490760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:07.991 [2024-11-17 16:05:53.492905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.991 [2024-11-17 16:05:53.492913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.556 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.556 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@868 -- # return 0 00:29:08.556 16:05:54 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:08.556 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:08.556 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:08.556 16:05:54 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.556 16:05:54 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:08.556 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.814 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:08.814 [2024-11-17 16:05:54.126514] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f406f5bd940) succeed. 00:29:08.814 [2024-11-17 16:05:54.135791] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f406f579940) succeed. 00:29:08.814 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.814 16:05:54 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:29:08.814 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.814 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:09.072 Malloc0 00:29:09.072 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.072 16:05:54 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:29:09.072 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.072 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:09.072 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.072 16:05:54 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:29:09.072 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.072 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:09.072 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.072 16:05:54 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:09.073 [2024-11-17 16:05:54.544372] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # config=() 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # local subsystem config 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:09.073 { 00:29:09.073 "params": { 00:29:09.073 "name": "Nvme$subsystem", 00:29:09.073 "trtype": "$TEST_TRANSPORT", 00:29:09.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:09.073 "adrfam": "ipv4", 00:29:09.073 "trsvcid": "$NVMF_PORT", 00:29:09.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:09.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:09.073 "hdgst": ${hdgst:-false}, 00:29:09.073 "ddgst": ${ddgst:-false} 00:29:09.073 }, 00:29:09.073 "method": "bdev_nvme_attach_controller" 00:29:09.073 } 00:29:09.073 EOF 00:29:09.073 )") 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # cat 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # jq . 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@585 -- # IFS=, 00:29:09.073 16:05:54 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:09.073 "params": { 00:29:09.073 "name": "Nvme0", 00:29:09.073 "trtype": "rdma", 00:29:09.073 "traddr": "192.168.100.8", 00:29:09.073 "adrfam": "ipv4", 00:29:09.073 "trsvcid": "4420", 00:29:09.073 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:09.073 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:09.073 "hdgst": false, 00:29:09.073 "ddgst": false 00:29:09.073 }, 00:29:09.073 "method": "bdev_nvme_attach_controller" 00:29:09.073 }' 00:29:09.331 [2024-11-17 16:05:54.627513] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:09.331 [2024-11-17 16:05:54.627624] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455662 ] 00:29:09.331 [2024-11-17 16:05:54.754072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:09.331 [2024-11-17 16:05:54.857043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:09.331 [2024-11-17 16:05:54.857052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:15.896 bdev Nvme0n1 reports 1 memory domains 00:29:15.896 bdev Nvme0n1 supports RDMA memory domain 00:29:15.896 Initialization complete, running randrw IO for 5 sec on 2 cores 00:29:15.896 ========================================================================== 00:29:15.896 Latency [us] 00:29:15.896 IOPS MiB/s Average min max 00:29:15.896 Core 2: 19599.40 76.56 815.67 311.20 12719.71 00:29:15.896 Core 3: 19541.61 76.33 818.06 281.63 12932.19 00:29:15.896 ========================================================================== 00:29:15.896 Total : 39141.01 152.89 816.87 281.63 12932.19 00:29:15.896 00:29:15.896 Total operations: 195741, translate 195741 pull_push 0 memzero 0 00:29:15.896 16:06:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:29:15.896 16:06:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:29:15.896 16:06:01 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:29:15.896 [2024-11-17 16:06:01.220981] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:15.896 [2024-11-17 16:06:01.221077] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456801 ] 00:29:15.896 [2024-11-17 16:06:01.352112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:16.155 [2024-11-17 16:06:01.455705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.155 [2024-11-17 16:06:01.455710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:22.724 bdev Malloc0 reports 2 memory domains 00:29:22.724 bdev Malloc0 doesn't support RDMA memory domain 00:29:22.724 Initialization complete, running randrw IO for 5 sec on 2 cores 00:29:22.724 ========================================================================== 00:29:22.724 Latency [us] 00:29:22.724 IOPS MiB/s Average min max 00:29:22.724 Core 2: 12502.78 48.84 1278.83 414.86 1711.16 00:29:22.724 Core 3: 12820.65 50.08 1247.08 485.18 1620.56 00:29:22.724 ========================================================================== 00:29:22.724 Total : 25323.43 98.92 1262.76 414.86 1711.16 00:29:22.724 00:29:22.724 Total operations: 126666, translate 0 pull_push 506664 memzero 0 00:29:22.724 16:06:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:29:22.724 16:06:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:29:22.724 16:06:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:29:22.724 16:06:08 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:29:22.724 Ignoring -M option 00:29:22.724 [2024-11-17 16:06:08.168046] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:22.724 [2024-11-17 16:06:08.168137] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458339 ] 00:29:22.984 [2024-11-17 16:06:08.298245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:22.984 [2024-11-17 16:06:08.402200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:22.984 [2024-11-17 16:06:08.402207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:29.654 bdev bdc12fc3-f183-453d-be0f-e0d79413308c reports 1 memory domains 00:29:29.654 bdev bdc12fc3-f183-453d-be0f-e0d79413308c supports RDMA memory domain 00:29:29.655 Initialization complete, running randread IO for 5 sec on 2 cores 00:29:29.655 ========================================================================== 00:29:29.655 Latency [us] 00:29:29.655 IOPS MiB/s Average min max 00:29:29.655 Core 2: 63402.48 247.67 251.42 79.52 2231.84 00:29:29.655 Core 3: 65260.62 254.92 244.25 80.28 2516.68 00:29:29.655 ========================================================================== 00:29:29.655 Total : 128663.10 502.59 247.78 79.52 2516.68 00:29:29.655 00:29:29.655 Total operations: 643404, translate 0 pull_push 0 memzero 643404 00:29:29.655 16:06:14 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:29:29.655 [2024-11-17 16:06:14.941766] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:32.191 Initializing NVMe Controllers 00:29:32.191 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:29:32.191 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:32.191 Initialization complete. Launching workers. 00:29:32.191 ======================================================== 00:29:32.191 Latency(us) 00:29:32.191 Device Information : IOPS MiB/s Average min max 00:29:32.191 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2024.67 7.91 7957.17 6001.18 9959.76 00:29:32.191 ======================================================== 00:29:32.191 Total : 2024.67 7.91 7957.17 6001.18 9959.76 00:29:32.191 00:29:32.191 16:06:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:29:32.191 16:06:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:29:32.191 16:06:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:29:32.191 16:06:17 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:29:32.191 [2024-11-17 16:06:17.433203] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:29:32.191 [2024-11-17 16:06:17.433316] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459935 ] 00:29:32.191 [2024-11-17 16:06:17.561393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:32.191 [2024-11-17 16:06:17.665677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.191 [2024-11-17 16:06:17.665683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.761 bdev 88ae670c-eb27-423d-a66e-29bd354997c3 reports 1 memory domains 00:29:38.761 bdev 88ae670c-eb27-423d-a66e-29bd354997c3 supports RDMA memory domain 00:29:38.761 Initialization complete, running randrw IO for 5 sec on 2 cores 00:29:38.761 ========================================================================== 00:29:38.761 Latency [us] 00:29:38.762 IOPS MiB/s Average min max 00:29:38.762 Core 2: 16980.95 66.33 941.44 20.46 8927.88 00:29:38.762 Core 3: 17337.44 67.72 922.08 11.86 7601.58 00:29:38.762 ========================================================================== 00:29:38.762 Total : 34318.39 134.06 931.66 11.86 8927.88 00:29:38.762 00:29:38.762 Total operations: 171647, translate 171509 pull_push 0 memzero 138 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:38.762 rmmod nvme_rdma 00:29:38.762 rmmod nvme_fabrics 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@517 -- # '[' -n 3455378 ']' 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # killprocess 3455378 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # '[' -z 3455378 ']' 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@958 -- # kill -0 3455378 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # uname 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3455378 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3455378' 00:29:38.762 killing process 
with pid 3455378 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@973 -- # kill 3455378 00:29:38.762 16:06:24 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@978 -- # wait 3455378 00:29:40.669 16:06:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:40.669 16:06:26 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:40.669 00:29:40.669 real 0m39.534s 00:29:40.669 user 1m56.519s 00:29:40.669 sys 0m6.971s 00:29:40.669 16:06:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.669 16:06:26 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:40.669 ************************************ 00:29:40.669 END TEST dma 00:29:40.669 ************************************ 00:29:40.669 16:06:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:29:40.669 16:06:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:40.669 16:06:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.669 16:06:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.928 ************************************ 00:29:40.928 START TEST nvmf_identify 00:29:40.928 ************************************ 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:29:40.928 * Looking for test storage... 00:29:40.928 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v 
= 0 )) 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:40.928 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:40.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.929 --rc genhtml_branch_coverage=1 00:29:40.929 --rc genhtml_function_coverage=1 00:29:40.929 --rc genhtml_legend=1 00:29:40.929 --rc geninfo_all_blocks=1 00:29:40.929 --rc geninfo_unexecuted_blocks=1 00:29:40.929 00:29:40.929 ' 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:40.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.929 --rc genhtml_branch_coverage=1 00:29:40.929 --rc genhtml_function_coverage=1 00:29:40.929 --rc genhtml_legend=1 00:29:40.929 --rc geninfo_all_blocks=1 00:29:40.929 --rc geninfo_unexecuted_blocks=1 00:29:40.929 00:29:40.929 ' 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:40.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.929 --rc genhtml_branch_coverage=1 00:29:40.929 --rc genhtml_function_coverage=1 00:29:40.929 --rc genhtml_legend=1 00:29:40.929 --rc geninfo_all_blocks=1 00:29:40.929 --rc geninfo_unexecuted_blocks=1 00:29:40.929 00:29:40.929 ' 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:40.929 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.929 --rc genhtml_branch_coverage=1 00:29:40.929 --rc genhtml_function_coverage=1 00:29:40.929 --rc genhtml_legend=1 00:29:40.929 --rc geninfo_all_blocks=1 00:29:40.929 --rc geninfo_unexecuted_blocks=1 00:29:40.929 00:29:40.929 ' 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:40.929 16:06:26 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:40.929 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:40.929 16:06:26 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:40.929 16:06:26 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.505 16:06:32 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:47.505 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:47.505 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:47.505 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:47.505 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # rdma_device_init 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:47.505 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:47.506 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:47.506 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:47.506 altname enp217s0f0np0 00:29:47.506 altname ens818f0np0 00:29:47.506 inet 192.168.100.8/24 scope global mlx_0_0 00:29:47.506 valid_lft forever preferred_lft forever 00:29:47.506 16:06:32 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:47.506 16:06:33 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:47.506 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:47.506 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:47.506 altname enp217s0f1np1 00:29:47.506 altname ens818f1np1 00:29:47.506 inet 192.168.100.9/24 scope global mlx_0_1 00:29:47.506 valid_lft forever preferred_lft forever 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:47.506 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:29:47.766 16:06:33 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:29:47.766 192.168.100.9' 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:47.766 192.168.100.9' 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # head -n 1 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:47.766 192.168.100.9' 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # tail -n +2 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # head -n 1 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3464680 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 3464680 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3464680 ']' 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.766 16:06:33 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:47.766 [2024-11-17 16:06:33.235041] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:47.766 [2024-11-17 16:06:33.235164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.025 [2024-11-17 16:06:33.369310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.026 [2024-11-17 16:06:33.476800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.026 [2024-11-17 16:06:33.476847] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.026 [2024-11-17 16:06:33.476861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.026 [2024-11-17 16:06:33.476874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.026 [2024-11-17 16:06:33.476884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.026 [2024-11-17 16:06:33.479325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.026 [2024-11-17 16:06:33.479399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.026 [2024-11-17 16:06:33.479463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.026 [2024-11-17 16:06:33.479471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.594 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.594 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:48.594 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:29:48.594 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.594 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:48.594 [2024-11-17 16:06:34.074155] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f6a8b5b3940) succeed. 00:29:48.594 [2024-11-17 16:06:34.083710] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f6a8b56f940) succeed. 
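The bring-up traced above is driven by the nvmf/common.sh helpers. As a condensed manual sketch of the same steps (assuming a stock SPDK build tree with relative paths instead of the absolute Jenkins workspace paths, and the mlx5 interfaces and addresses reported above), the sequence is roughly:

# load the RDMA/IB stack and the host-side NVMe/RDMA driver, mirroring the modprobe calls above
sudo modprobe ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
sudo modprobe nvme-rdma
# confirm the RoCE ports carry the test addresses reported above
ip -o -4 addr show mlx_0_0   # expect 192.168.100.8/24
ip -o -4 addr show mlx_0_1   # expect 192.168.100.9/24
# start the target with the same instance id, tracepoint mask and core mask as the harness
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# create the RDMA transport; rpc_cmd in the log is roughly equivalent to a scripts/rpc.py call
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192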
00:29:48.853 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.853 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:48.853 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.853 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.113 Malloc0 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.113 [2024-11-17 16:06:34.509093] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.113 [ 00:29:49.113 { 00:29:49.113 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:49.113 "subtype": "Discovery", 00:29:49.113 "listen_addresses": [ 00:29:49.113 { 00:29:49.113 "trtype": "RDMA", 
00:29:49.113 "adrfam": "IPv4", 00:29:49.113 "traddr": "192.168.100.8", 00:29:49.113 "trsvcid": "4420" 00:29:49.113 } 00:29:49.113 ], 00:29:49.113 "allow_any_host": true, 00:29:49.113 "hosts": [] 00:29:49.113 }, 00:29:49.113 { 00:29:49.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.113 "subtype": "NVMe", 00:29:49.113 "listen_addresses": [ 00:29:49.113 { 00:29:49.113 "trtype": "RDMA", 00:29:49.113 "adrfam": "IPv4", 00:29:49.113 "traddr": "192.168.100.8", 00:29:49.113 "trsvcid": "4420" 00:29:49.113 } 00:29:49.113 ], 00:29:49.113 "allow_any_host": true, 00:29:49.113 "hosts": [], 00:29:49.113 "serial_number": "SPDK00000000000001", 00:29:49.113 "model_number": "SPDK bdev Controller", 00:29:49.113 "max_namespaces": 32, 00:29:49.113 "min_cntlid": 1, 00:29:49.113 "max_cntlid": 65519, 00:29:49.113 "namespaces": [ 00:29:49.113 { 00:29:49.113 "nsid": 1, 00:29:49.113 "bdev_name": "Malloc0", 00:29:49.113 "name": "Malloc0", 00:29:49.113 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:49.113 "eui64": "ABCDEF0123456789", 00:29:49.113 "uuid": "ee8860ba-6d31-45ea-8745-1531527c2757" 00:29:49.113 } 00:29:49.113 ] 00:29:49.113 } 00:29:49.113 ] 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.113 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:49.113 [2024-11-17 16:06:34.589672] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:49.113 [2024-11-17 16:06:34.589739] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464968 ] 00:29:49.376 [2024-11-17 16:06:34.671716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:49.376 [2024-11-17 16:06:34.671830] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:29:49.376 [2024-11-17 16:06:34.671861] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:29:49.376 [2024-11-17 16:06:34.671870] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:29:49.376 [2024-11-17 16:06:34.671920] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:49.376 [2024-11-17 16:06:34.682459] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
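The RPC sequence that produced the subsystem listing above, together with the identify invocation now connecting, condenses to roughly the following scripts/rpc.py calls (assuming the default /var/tmp/spdk.sock RPC socket and an SPDK checkout as the working directory):

# 64 MiB malloc bdev with 512-byte blocks backing the namespace
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# NVM subsystem open to any host (-a) with the serial number shown in the listing above
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
# expose both the NVM subsystem and the discovery service on the RDMA listener
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
# verify, then interrogate the discovery controller exactly as the test does
./scripts/rpc.py nvmf_get_subsystems
./build/bin/spdk_nvme_identify -L all \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'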
00:29:49.376 [2024-11-17 16:06:34.696862] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:29:49.376 [2024-11-17 16:06:34.696883] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:29:49.376 [2024-11-17 16:06:34.696908] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.696923] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.696935] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.696943] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.696953] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.696965] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.696975] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.696983] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.696993] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697001] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697011] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697019] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697029] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697039] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697048] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697057] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697066] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697074] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697085] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697094] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697103] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697111] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697121] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 
16:06:34.697129] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697145] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697154] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697163] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697171] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697182] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697192] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697202] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697209] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:29:49.376 [2024-11-17 16:06:34.697221] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:29:49.376 [2024-11-17 16:06:34.697228] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:29:49.376 [2024-11-17 16:06:34.697263] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.697283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cedc0 len:0x400 key:0x184f00 00:29:49.376 [2024-11-17 16:06:34.702579] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.376 [2024-11-17 16:06:34.702603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:29:49.376 [2024-11-17 16:06:34.702620] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.702633] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:49.376 [2024-11-17 16:06:34.702657] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:49.376 [2024-11-17 16:06:34.702668] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:49.376 [2024-11-17 16:06:34.702693] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.702708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.376 [2024-11-17 16:06:34.702743] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.376 [2024-11-17 16:06:34.702753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:29:49.376 [2024-11-17 16:06:34.702770] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:49.376 [2024-11-17 16:06:34.702782] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.702794] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:49.376 [2024-11-17 16:06:34.702806] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.702822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.376 [2024-11-17 16:06:34.702837] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.376 [2024-11-17 16:06:34.702848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:29:49.376 [2024-11-17 16:06:34.702858] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:49.376 [2024-11-17 16:06:34.702869] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.702880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:49.376 [2024-11-17 16:06:34.702896] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.376 [2024-11-17 16:06:34.702907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.376 [2024-11-17 16:06:34.702932] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.376 [2024-11-17 16:06:34.702940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:49.377 [2024-11-17 16:06:34.702952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:49.377 [2024-11-17 16:06:34.702964] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.702978] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.702990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.377 [2024-11-17 16:06:34.703013] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.377 [2024-11-17 16:06:34.703022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:49.377 [2024-11-17 16:06:34.703035] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:49.377 [2024-11-17 16:06:34.703050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:49.377 [2024-11-17 16:06:34.703061] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x184f00 00:29:49.377 [2024-11-17 
16:06:34.703071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:49.377 [2024-11-17 16:06:34.703183] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:49.377 [2024-11-17 16:06:34.703194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:49.377 [2024-11-17 16:06:34.703210] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.377 [2024-11-17 16:06:34.703254] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.377 [2024-11-17 16:06:34.703263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:49.377 [2024-11-17 16:06:34.703274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:49.377 [2024-11-17 16:06:34.703284] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703298] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.377 [2024-11-17 16:06:34.703351] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.377 [2024-11-17 16:06:34.703360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:29:49.377 [2024-11-17 16:06:34.703373] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:49.377 [2024-11-17 16:06:34.703383] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:49.377 [2024-11-17 16:06:34.703394] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703406] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:49.377 [2024-11-17 16:06:34.703425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:49.377 [2024-11-17 16:06:34.703446] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184f00 00:29:49.377 [2024-11-17 16:06:34.703521] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 
00:29:49.377 [2024-11-17 16:06:34.703532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:49.377 [2024-11-17 16:06:34.703548] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:49.377 [2024-11-17 16:06:34.703560] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:49.377 [2024-11-17 16:06:34.703577] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:49.377 [2024-11-17 16:06:34.703589] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:49.377 [2024-11-17 16:06:34.703598] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:49.377 [2024-11-17 16:06:34.703609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:49.377 [2024-11-17 16:06:34.703618] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:49.377 [2024-11-17 16:06:34.703650] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703666] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.377 [2024-11-17 16:06:34.703693] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.377 [2024-11-17 16:06:34.703704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:49.377 [2024-11-17 16:06:34.703716] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0200 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.377 [2024-11-17 16:06:34.703740] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.377 [2024-11-17 16:06:34.703762] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.377 [2024-11-17 16:06:34.703783] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703795] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.377 [2024-11-17 16:06:34.703803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:49.377 [2024-11-17 16:06:34.703814] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:49.377 [2024-11-17 16:06:34.703849] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.377 [2024-11-17 16:06:34.703883] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.377 [2024-11-17 16:06:34.703892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:29:49.377 [2024-11-17 16:06:34.703903] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:49.377 [2024-11-17 16:06:34.703913] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:49.377 [2024-11-17 16:06:34.703927] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703944] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.703960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184f00 00:29:49.377 [2024-11-17 16:06:34.703995] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.377 [2024-11-17 16:06:34.704007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:49.377 [2024-11-17 16:06:34.704023] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.704043] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:49.377 [2024-11-17 16:06:34.704089] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.704105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x400 key:0x184f00 00:29:49.377 [2024-11-17 16:06:34.704116] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.704135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.377 [2024-11-17 16:06:34.704177] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.377 [2024-11-17 16:06:34.704188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
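The debug trace above walks the standard fabrics bring-up against the discovery controller: FABRIC CONNECT, VS/CAP property reads, CC.EN=1 followed by CSTS.RDY=1, IDENTIFY controller, AER configuration, the 5-second keep-alive, and now the discovery GET LOG PAGE commands. As a host-side cross-check of the same discovery service (assuming nvme-cli with RDMA support is installed on the initiator, which this test does not itself require), the equivalent queries would be roughly:

# retrieve the same discovery log page that the identify output below prints
sudo nvme discover -t rdma -a 192.168.100.8 -s 4420
# connect to the NVM subsystem it advertises; -i 15 matches the NVME_CONNECT options the harness set earlier
sudo nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 -i 15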
00:29:49.377 [2024-11-17 16:06:34.704217] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.704234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x184f00 00:29:49.377 [2024-11-17 16:06:34.704243] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.704255] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.377 [2024-11-17 16:06:34.704263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:49.377 [2024-11-17 16:06:34.704273] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.704282] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.377 [2024-11-17 16:06:34.704292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:49.377 [2024-11-17 16:06:34.704308] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184f00 00:29:49.377 [2024-11-17 16:06:34.704324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x184f00 00:29:49.378 [2024-11-17 16:06:34.704333] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x184f00 00:29:49.378 [2024-11-17 16:06:34.704365] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.378 [2024-11-17 16:06:34.704373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:49.378 [2024-11-17 16:06:34.704394] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x184f00 00:29:49.378 ===================================================== 00:29:49.378 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:49.378 ===================================================== 00:29:49.378 Controller Capabilities/Features 00:29:49.378 ================================ 00:29:49.378 Vendor ID: 0000 00:29:49.378 Subsystem Vendor ID: 0000 00:29:49.378 Serial Number: .................... 00:29:49.378 Model Number: ........................................ 
00:29:49.378 Firmware Version: 25.01 00:29:49.378 Recommended Arb Burst: 0 00:29:49.378 IEEE OUI Identifier: 00 00 00 00:29:49.378 Multi-path I/O 00:29:49.378 May have multiple subsystem ports: No 00:29:49.378 May have multiple controllers: No 00:29:49.378 Associated with SR-IOV VF: No 00:29:49.378 Max Data Transfer Size: 131072 00:29:49.378 Max Number of Namespaces: 0 00:29:49.378 Max Number of I/O Queues: 1024 00:29:49.378 NVMe Specification Version (VS): 1.3 00:29:49.378 NVMe Specification Version (Identify): 1.3 00:29:49.378 Maximum Queue Entries: 128 00:29:49.378 Contiguous Queues Required: Yes 00:29:49.378 Arbitration Mechanisms Supported 00:29:49.378 Weighted Round Robin: Not Supported 00:29:49.378 Vendor Specific: Not Supported 00:29:49.378 Reset Timeout: 15000 ms 00:29:49.378 Doorbell Stride: 4 bytes 00:29:49.378 NVM Subsystem Reset: Not Supported 00:29:49.378 Command Sets Supported 00:29:49.378 NVM Command Set: Supported 00:29:49.378 Boot Partition: Not Supported 00:29:49.378 Memory Page Size Minimum: 4096 bytes 00:29:49.378 Memory Page Size Maximum: 4096 bytes 00:29:49.378 Persistent Memory Region: Not Supported 00:29:49.378 Optional Asynchronous Events Supported 00:29:49.378 Namespace Attribute Notices: Not Supported 00:29:49.378 Firmware Activation Notices: Not Supported 00:29:49.378 ANA Change Notices: Not Supported 00:29:49.378 PLE Aggregate Log Change Notices: Not Supported 00:29:49.378 LBA Status Info Alert Notices: Not Supported 00:29:49.378 EGE Aggregate Log Change Notices: Not Supported 00:29:49.378 Normal NVM Subsystem Shutdown event: Not Supported 00:29:49.378 Zone Descriptor Change Notices: Not Supported 00:29:49.378 Discovery Log Change Notices: Supported 00:29:49.378 Controller Attributes 00:29:49.378 128-bit Host Identifier: Not Supported 00:29:49.378 Non-Operational Permissive Mode: Not Supported 00:29:49.378 NVM Sets: Not Supported 00:29:49.378 Read Recovery Levels: Not Supported 00:29:49.378 Endurance Groups: Not Supported 00:29:49.378 Predictable Latency Mode: Not Supported 00:29:49.378 Traffic Based Keep ALive: Not Supported 00:29:49.378 Namespace Granularity: Not Supported 00:29:49.378 SQ Associations: Not Supported 00:29:49.378 UUID List: Not Supported 00:29:49.378 Multi-Domain Subsystem: Not Supported 00:29:49.378 Fixed Capacity Management: Not Supported 00:29:49.378 Variable Capacity Management: Not Supported 00:29:49.378 Delete Endurance Group: Not Supported 00:29:49.378 Delete NVM Set: Not Supported 00:29:49.378 Extended LBA Formats Supported: Not Supported 00:29:49.378 Flexible Data Placement Supported: Not Supported 00:29:49.378 00:29:49.378 Controller Memory Buffer Support 00:29:49.378 ================================ 00:29:49.378 Supported: No 00:29:49.378 00:29:49.378 Persistent Memory Region Support 00:29:49.378 ================================ 00:29:49.378 Supported: No 00:29:49.378 00:29:49.378 Admin Command Set Attributes 00:29:49.378 ============================ 00:29:49.378 Security Send/Receive: Not Supported 00:29:49.378 Format NVM: Not Supported 00:29:49.378 Firmware Activate/Download: Not Supported 00:29:49.378 Namespace Management: Not Supported 00:29:49.378 Device Self-Test: Not Supported 00:29:49.378 Directives: Not Supported 00:29:49.378 NVMe-MI: Not Supported 00:29:49.378 Virtualization Management: Not Supported 00:29:49.378 Doorbell Buffer Config: Not Supported 00:29:49.378 Get LBA Status Capability: Not Supported 00:29:49.378 Command & Feature Lockdown Capability: Not Supported 00:29:49.378 Abort Command Limit: 1 00:29:49.378 Async 
Event Request Limit: 4 00:29:49.378 Number of Firmware Slots: N/A 00:29:49.378 Firmware Slot 1 Read-Only: N/A 00:29:49.378 Firmware Activation Without Reset: N/A 00:29:49.378 Multiple Update Detection Support: N/A 00:29:49.378 Firmware Update Granularity: No Information Provided 00:29:49.378 Per-Namespace SMART Log: No 00:29:49.378 Asymmetric Namespace Access Log Page: Not Supported 00:29:49.378 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:29:49.378 Command Effects Log Page: Not Supported 00:29:49.378 Get Log Page Extended Data: Supported 00:29:49.378 Telemetry Log Pages: Not Supported 00:29:49.378 Persistent Event Log Pages: Not Supported 00:29:49.378 Supported Log Pages Log Page: May Support 00:29:49.378 Commands Supported & Effects Log Page: Not Supported 00:29:49.378 Feature Identifiers & Effects Log Page:May Support 00:29:49.378 NVMe-MI Commands & Effects Log Page: May Support 00:29:49.378 Data Area 4 for Telemetry Log: Not Supported 00:29:49.378 Error Log Page Entries Supported: 128 00:29:49.378 Keep Alive: Not Supported 00:29:49.378 00:29:49.378 NVM Command Set Attributes 00:29:49.378 ========================== 00:29:49.378 Submission Queue Entry Size 00:29:49.378 Max: 1 00:29:49.378 Min: 1 00:29:49.378 Completion Queue Entry Size 00:29:49.378 Max: 1 00:29:49.378 Min: 1 00:29:49.378 Number of Namespaces: 0 00:29:49.378 Compare Command: Not Supported 00:29:49.378 Write Uncorrectable Command: Not Supported 00:29:49.378 Dataset Management Command: Not Supported 00:29:49.378 Write Zeroes Command: Not Supported 00:29:49.378 Set Features Save Field: Not Supported 00:29:49.378 Reservations: Not Supported 00:29:49.378 Timestamp: Not Supported 00:29:49.378 Copy: Not Supported 00:29:49.378 Volatile Write Cache: Not Present 00:29:49.378 Atomic Write Unit (Normal): 1 00:29:49.378 Atomic Write Unit (PFail): 1 00:29:49.378 Atomic Compare & Write Unit: 1 00:29:49.378 Fused Compare & Write: Supported 00:29:49.378 Scatter-Gather List 00:29:49.378 SGL Command Set: Supported 00:29:49.378 SGL Keyed: Supported 00:29:49.378 SGL Bit Bucket Descriptor: Not Supported 00:29:49.378 SGL Metadata Pointer: Not Supported 00:29:49.378 Oversized SGL: Not Supported 00:29:49.378 SGL Metadata Address: Not Supported 00:29:49.378 SGL Offset: Supported 00:29:49.378 Transport SGL Data Block: Not Supported 00:29:49.378 Replay Protected Memory Block: Not Supported 00:29:49.378 00:29:49.378 Firmware Slot Information 00:29:49.378 ========================= 00:29:49.378 Active slot: 0 00:29:49.378 00:29:49.378 00:29:49.378 Error Log 00:29:49.378 ========= 00:29:49.378 00:29:49.378 Active Namespaces 00:29:49.378 ================= 00:29:49.378 Discovery Log Page 00:29:49.378 ================== 00:29:49.378 Generation Counter: 2 00:29:49.378 Number of Records: 2 00:29:49.378 Record Format: 0 00:29:49.378 00:29:49.378 Discovery Log Entry 0 00:29:49.378 ---------------------- 00:29:49.378 Transport Type: 1 (RDMA) 00:29:49.378 Address Family: 1 (IPv4) 00:29:49.378 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:49.378 Entry Flags: 00:29:49.378 Duplicate Returned Information: 1 00:29:49.378 Explicit Persistent Connection Support for Discovery: 1 00:29:49.378 Transport Requirements: 00:29:49.378 Secure Channel: Not Required 00:29:49.378 Port ID: 0 (0x0000) 00:29:49.378 Controller ID: 65535 (0xffff) 00:29:49.378 Admin Max SQ Size: 128 00:29:49.378 Transport Service Identifier: 4420 00:29:49.378 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:49.378 Transport Address: 192.168.100.8 00:29:49.378 
Transport Specific Address Subtype - RDMA 00:29:49.378 RDMA QP Service Type: 1 (Reliable Connected) 00:29:49.378 RDMA Provider Type: 1 (No provider specified) 00:29:49.378 RDMA CM Service: 1 (RDMA_CM) 00:29:49.378 Discovery Log Entry 1 00:29:49.378 ---------------------- 00:29:49.378 Transport Type: 1 (RDMA) 00:29:49.378 Address Family: 1 (IPv4) 00:29:49.378 Subsystem Type: 2 (NVM Subsystem) 00:29:49.378 Entry Flags: 00:29:49.378 Duplicate Returned Information: 0 00:29:49.379 Explicit Persistent Connection Support for Discovery: 0 00:29:49.379 Transport Requirements: 00:29:49.379 Secure Channel: Not Required 00:29:49.379 Port ID: 0 (0x0000) 00:29:49.379 Controller ID: 65535 (0xffff) 00:29:49.379 Admin Max SQ Size: [2024-11-17 16:06:34.704508] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:49.379 [2024-11-17 16:06:34.704547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.704558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.704576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.704592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.704608] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.704621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.704644] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.704654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.704669] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.704681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.704696] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.704712] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.704725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.704739] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:49.379 [2024-11-17 16:06:34.704753] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:49.379 [2024-11-17 16:06:34.704762] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.704776] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 
0x184f00 00:29:49.379 [2024-11-17 16:06:34.704788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.704810] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.704818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.704829] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.704842] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.704859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.704881] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.704892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.704901] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.704917] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.704928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.704950] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.704958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.704971] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.704983] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.704996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.705020] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.705031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.705039] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705054] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.705088] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.705098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.705109] 
nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705121] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.705151] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.705161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.705170] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705187] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.705230] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.705238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.705249] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705261] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.705294] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.705305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.705313] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705327] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.705370] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.705378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.705389] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705401] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.705442] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.705452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.705461] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705479] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.705517] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.705525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.705536] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705547] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.705581] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.705591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.705600] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705614] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.705653] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.705663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:29:49.379 [2024-11-17 16:06:34.705675] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705688] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.379 [2024-11-17 16:06:34.705701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.379 [2024-11-17 16:06:34.705719] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.379 [2024-11-17 16:06:34.705729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.705738] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.705752] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 
length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.705763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.705787] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.705795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.705806] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.705818] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.705832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.705846] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.705858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.705867] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.705886] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.705897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.705922] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.705930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.705941] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.705952] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.705967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.705989] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.705999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.706008] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706024] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.706060] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.706069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 
16:06:34.706083] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706096] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.706128] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.706139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.706147] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706161] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.706193] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.706203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.706214] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706226] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.706268] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.706279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.706288] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706304] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.706338] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.706347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.706358] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706374] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.706408] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.706418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.706427] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706444] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.706478] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.706486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.706498] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706510] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.706530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.706544] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.706555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.710571] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.710603] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.710622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.380 [2024-11-17 16:06:34.710650] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.380 [2024-11-17 16:06:34.710660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:000c p:0 m:0 dnr:0 00:29:49.380 [2024-11-17 16:06:34.710673] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x184f00 00:29:49.380 [2024-11-17 16:06:34.710683] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:29:49.380 128 00:29:49.380 Transport Service Identifier: 4420 00:29:49.380 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:49.380 Transport Address: 192.168.100.8 00:29:49.380 Transport Specific Address Subtype - RDMA 00:29:49.380 RDMA QP Service Type: 1 (Reliable Connected) 00:29:49.380 RDMA Provider Type: 1 (No provider specified) 00:29:49.380 RDMA CM Service: 1 (RDMA_CM) 00:29:49.381 16:06:34 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:49.381 [2024-11-17 16:06:34.872623] Starting 
SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:49.381 [2024-11-17 16:06:34.872694] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3464972 ] 00:29:49.643 [2024-11-17 16:06:34.952821] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:49.643 [2024-11-17 16:06:34.952934] nvme_rdma.c:2206:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:29:49.643 [2024-11-17 16:06:34.952963] nvme_rdma.c:1204:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:29:49.643 [2024-11-17 16:06:34.952972] nvme_rdma.c:1208:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:29:49.643 [2024-11-17 16:06:34.953016] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:49.643 [2024-11-17 16:06:34.963073] nvme_rdma.c: 427:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:29:49.643 [2024-11-17 16:06:34.977855] nvme_rdma.c:1090:nvme_rdma_connect_established: *DEBUG*: rc =0 00:29:49.643 [2024-11-17 16:06:34.977875] nvme_rdma.c:1095:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:29:49.643 [2024-11-17 16:06:34.977899] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.977910] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.977922] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.977931] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.977942] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.977951] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.977960] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.977969] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.977978] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.977987] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.977996] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978004] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978014] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978024] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978033] nvme_rdma.c: 
878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978042] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978051] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978059] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978071] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978079] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978089] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978097] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978106] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978114] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978131] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978139] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978149] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978157] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978168] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978177] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978187] nvme_rdma.c: 878:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978195] nvme_rdma.c:1109:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:29:49.643 [2024-11-17 16:06:34.978206] nvme_rdma.c:1112:nvme_rdma_connect_established: *DEBUG*: rc =0 00:29:49.643 [2024-11-17 16:06:34.978213] nvme_rdma.c:1117:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:29:49.643 [2024-11-17 16:06:34.978243] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.978261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cedc0 len:0x400 key:0x184f00 00:29:49.643 [2024-11-17 16:06:34.983580] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.643 [2024-11-17 16:06:34.983602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:29:49.643 [2024-11-17 16:06:34.983620] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.983632] 
nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:49.643 [2024-11-17 16:06:34.983649] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:49.643 [2024-11-17 16:06:34.983660] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:49.643 [2024-11-17 16:06:34.983681] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.983695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.643 [2024-11-17 16:06:34.983723] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.643 [2024-11-17 16:06:34.983733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:29:49.643 [2024-11-17 16:06:34.983747] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:49.643 [2024-11-17 16:06:34.983760] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.983773] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:49.643 [2024-11-17 16:06:34.983784] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.983800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.643 [2024-11-17 16:06:34.983820] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.643 [2024-11-17 16:06:34.983830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:29:49.643 [2024-11-17 16:06:34.983841] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:49.643 [2024-11-17 16:06:34.983852] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.983862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:49.643 [2024-11-17 16:06:34.983877] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.983888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.643 [2024-11-17 16:06:34.983914] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.643 [2024-11-17 16:06:34.983922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:49.643 [2024-11-17 16:06:34.983934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:49.643 [2024-11-17 16:06:34.983947] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 
length 0x10 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.983962] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.643 [2024-11-17 16:06:34.983974] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.643 [2024-11-17 16:06:34.983998] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.644 [2024-11-17 16:06:34.984006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:49.644 [2024-11-17 16:06:34.984020] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:49.644 [2024-11-17 16:06:34.984029] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:49.644 [2024-11-17 16:06:34.984040] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984055] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:49.644 [2024-11-17 16:06:34.984167] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:49.644 [2024-11-17 16:06:34.984178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:49.644 [2024-11-17 16:06:34.984194] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.644 [2024-11-17 16:06:34.984238] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.644 [2024-11-17 16:06:34.984247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:49.644 [2024-11-17 16:06:34.984258] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:49.644 [2024-11-17 16:06:34.984267] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984281] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.644 [2024-11-17 16:06:34.984321] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.644 [2024-11-17 16:06:34.984330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:29:49.644 [2024-11-17 16:06:34.984342] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:49.644 [2024-11-17 16:06:34.984351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:49.644 [2024-11-17 16:06:34.984361] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984373] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:49.644 [2024-11-17 16:06:34.984391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:49.644 [2024-11-17 16:06:34.984414] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184f00 00:29:49.644 [2024-11-17 16:06:34.984496] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.644 [2024-11-17 16:06:34.984506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:49.644 [2024-11-17 16:06:34.984522] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:49.644 [2024-11-17 16:06:34.984536] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:49.644 [2024-11-17 16:06:34.984545] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:49.644 [2024-11-17 16:06:34.984557] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:49.644 [2024-11-17 16:06:34.984571] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:49.644 [2024-11-17 16:06:34.984582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:49.644 [2024-11-17 16:06:34.984591] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984608] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:49.644 [2024-11-17 16:06:34.984621] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.644 [2024-11-17 16:06:34.984675] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.644 [2024-11-17 16:06:34.984686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:49.644 [2024-11-17 16:06:34.984702] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0200 length 0x40 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.644 [2024-11-17 16:06:34.984728] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.644 [2024-11-17 16:06:34.984749] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.644 [2024-11-17 16:06:34.984769] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.644 [2024-11-17 16:06:34.984789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:49.644 [2024-11-17 16:06:34.984803] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3c0 length 0x10 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984817] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:49.644 [2024-11-17 16:06:34.984833] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.644 [2024-11-17 16:06:34.984866] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.644 [2024-11-17 16:06:34.984874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:29:49.644 [2024-11-17 16:06:34.984885] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:49.644 [2024-11-17 16:06:34.984895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:49.644 [2024-11-17 16:06:34.984905] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf3e8 length 0x10 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:49.644 [2024-11-17 16:06:34.984928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:49.644 [2024-11-17 16:06:34.984938] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.984956] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.644 [2024-11-17 16:06:34.984979] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.644 [2024-11-17 16:06:34.984989] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:29:49.644 [2024-11-17 16:06:34.985067] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:49.644 [2024-11-17 16:06:34.985078] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf410 length 0x10 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.985094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:49.644 [2024-11-17 16:06:34.985116] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.985129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x184f00 00:29:49.644 [2024-11-17 16:06:34.985177] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.644 [2024-11-17 16:06:34.985185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:49.644 [2024-11-17 16:06:34.985214] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:49.644 [2024-11-17 16:06:34.985231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:49.644 [2024-11-17 16:06:34.985244] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf438 length 0x10 lkey 0x184f00 00:29:49.644 [2024-11-17 16:06:34.985255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:49.645 [2024-11-17 16:06:34.985273] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184f00 00:29:49.645 [2024-11-17 16:06:34.985346] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.645 [2024-11-17 16:06:34.985354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:49.645 [2024-11-17 16:06:34.985378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:49.645 [2024-11-17 16:06:34.985387] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf460 length 0x10 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:49.645 [2024-11-17 16:06:34.985415] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x184f00 00:29:49.645 [2024-11-17 16:06:34.985458] 
nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.645 [2024-11-17 16:06:34.985469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:49.645 [2024-11-17 16:06:34.985487] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:49.645 [2024-11-17 16:06:34.985498] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf488 length 0x10 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:49.645 [2024-11-17 16:06:34.985528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:49.645 [2024-11-17 16:06:34.985539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:49.645 [2024-11-17 16:06:34.985550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:49.645 [2024-11-17 16:06:34.985559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:49.645 [2024-11-17 16:06:34.985577] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:49.645 [2024-11-17 16:06:34.985586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:49.645 [2024-11-17 16:06:34.985597] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:49.645 [2024-11-17 16:06:34.985626] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985640] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.645 [2024-11-17 16:06:34.985651] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:49.645 [2024-11-17 16:06:34.985679] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.645 [2024-11-17 16:06:34.985692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:29:49.645 [2024-11-17 16:06:34.985705] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4b0 length 0x10 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985717] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.645 [2024-11-17 16:06:34.985725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:49.645 [2024-11-17 16:06:34.985735] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf4d8 length 0x10 lkey 0x184f00 00:29:49.645 [2024-11-17 
16:06:34.985749] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.645 [2024-11-17 16:06:34.985782] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.645 [2024-11-17 16:06:34.985792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:49.645 [2024-11-17 16:06:34.985800] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf500 length 0x10 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985814] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.645 [2024-11-17 16:06:34.985860] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.645 [2024-11-17 16:06:34.985869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:49.645 [2024-11-17 16:06:34.985882] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf528 length 0x10 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985895] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985909] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.645 [2024-11-17 16:06:34.985924] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.645 [2024-11-17 16:06:34.985934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:29:49.645 [2024-11-17 16:06:34.985943] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf550 length 0x10 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985973] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.985987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x184f00 00:29:49.645 [2024-11-17 16:06:34.986003] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d00c0 length 0x40 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.986015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x184f00 00:29:49.645 [2024-11-17 16:06:34.986031] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.986043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c8000 len:0x200 key:0x184f00 00:29:49.645 [2024-11-17 16:06:34.986061] 
nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.986073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c6000 len:0x1000 key:0x184f00 00:29:49.645 [2024-11-17 16:06:34.986088] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.645 [2024-11-17 16:06:34.986097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:49.645 [2024-11-17 16:06:34.986125] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf578 length 0x10 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.986136] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.645 [2024-11-17 16:06:34.986147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:49.645 [2024-11-17 16:06:34.986162] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5a0 length 0x10 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.986172] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.645 [2024-11-17 16:06:34.986179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:49.645 [2024-11-17 16:06:34.986191] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5c8 length 0x10 lkey 0x184f00 00:29:49.645 [2024-11-17 16:06:34.986200] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.645 [2024-11-17 16:06:34.986209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:49.645 [2024-11-17 16:06:34.986226] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf5f0 length 0x10 lkey 0x184f00 00:29:49.645 ===================================================== 00:29:49.645 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.645 ===================================================== 00:29:49.645 Controller Capabilities/Features 00:29:49.645 ================================ 00:29:49.645 Vendor ID: 8086 00:29:49.645 Subsystem Vendor ID: 8086 00:29:49.645 Serial Number: SPDK00000000000001 00:29:49.645 Model Number: SPDK bdev Controller 00:29:49.645 Firmware Version: 25.01 00:29:49.645 Recommended Arb Burst: 6 00:29:49.645 IEEE OUI Identifier: e4 d2 5c 00:29:49.645 Multi-path I/O 00:29:49.645 May have multiple subsystem ports: Yes 00:29:49.645 May have multiple controllers: Yes 00:29:49.645 Associated with SR-IOV VF: No 00:29:49.645 Max Data Transfer Size: 131072 00:29:49.645 Max Number of Namespaces: 32 00:29:49.645 Max Number of I/O Queues: 127 00:29:49.645 NVMe Specification Version (VS): 1.3 00:29:49.645 NVMe Specification Version (Identify): 1.3 00:29:49.645 Maximum Queue Entries: 128 00:29:49.645 Contiguous Queues Required: Yes 00:29:49.645 Arbitration Mechanisms Supported 00:29:49.645 Weighted Round Robin: Not Supported 00:29:49.645 Vendor Specific: Not Supported 00:29:49.645 Reset Timeout: 15000 ms 00:29:49.645 Doorbell Stride: 4 bytes 00:29:49.645 NVM Subsystem Reset: Not Supported 00:29:49.645 Command Sets Supported 00:29:49.645 NVM Command Set: Supported 00:29:49.645 Boot Partition: Not Supported 00:29:49.645 Memory Page Size 
Minimum: 4096 bytes 00:29:49.645 Memory Page Size Maximum: 4096 bytes 00:29:49.646 Persistent Memory Region: Not Supported 00:29:49.646 Optional Asynchronous Events Supported 00:29:49.646 Namespace Attribute Notices: Supported 00:29:49.646 Firmware Activation Notices: Not Supported 00:29:49.646 ANA Change Notices: Not Supported 00:29:49.646 PLE Aggregate Log Change Notices: Not Supported 00:29:49.646 LBA Status Info Alert Notices: Not Supported 00:29:49.646 EGE Aggregate Log Change Notices: Not Supported 00:29:49.646 Normal NVM Subsystem Shutdown event: Not Supported 00:29:49.646 Zone Descriptor Change Notices: Not Supported 00:29:49.646 Discovery Log Change Notices: Not Supported 00:29:49.646 Controller Attributes 00:29:49.646 128-bit Host Identifier: Supported 00:29:49.646 Non-Operational Permissive Mode: Not Supported 00:29:49.646 NVM Sets: Not Supported 00:29:49.646 Read Recovery Levels: Not Supported 00:29:49.646 Endurance Groups: Not Supported 00:29:49.646 Predictable Latency Mode: Not Supported 00:29:49.646 Traffic Based Keep ALive: Not Supported 00:29:49.646 Namespace Granularity: Not Supported 00:29:49.646 SQ Associations: Not Supported 00:29:49.646 UUID List: Not Supported 00:29:49.646 Multi-Domain Subsystem: Not Supported 00:29:49.646 Fixed Capacity Management: Not Supported 00:29:49.646 Variable Capacity Management: Not Supported 00:29:49.646 Delete Endurance Group: Not Supported 00:29:49.646 Delete NVM Set: Not Supported 00:29:49.646 Extended LBA Formats Supported: Not Supported 00:29:49.646 Flexible Data Placement Supported: Not Supported 00:29:49.646 00:29:49.646 Controller Memory Buffer Support 00:29:49.646 ================================ 00:29:49.646 Supported: No 00:29:49.646 00:29:49.646 Persistent Memory Region Support 00:29:49.646 ================================ 00:29:49.646 Supported: No 00:29:49.646 00:29:49.646 Admin Command Set Attributes 00:29:49.646 ============================ 00:29:49.646 Security Send/Receive: Not Supported 00:29:49.646 Format NVM: Not Supported 00:29:49.646 Firmware Activate/Download: Not Supported 00:29:49.646 Namespace Management: Not Supported 00:29:49.646 Device Self-Test: Not Supported 00:29:49.646 Directives: Not Supported 00:29:49.646 NVMe-MI: Not Supported 00:29:49.646 Virtualization Management: Not Supported 00:29:49.646 Doorbell Buffer Config: Not Supported 00:29:49.646 Get LBA Status Capability: Not Supported 00:29:49.646 Command & Feature Lockdown Capability: Not Supported 00:29:49.646 Abort Command Limit: 4 00:29:49.646 Async Event Request Limit: 4 00:29:49.646 Number of Firmware Slots: N/A 00:29:49.646 Firmware Slot 1 Read-Only: N/A 00:29:49.646 Firmware Activation Without Reset: N/A 00:29:49.646 Multiple Update Detection Support: N/A 00:29:49.646 Firmware Update Granularity: No Information Provided 00:29:49.646 Per-Namespace SMART Log: No 00:29:49.646 Asymmetric Namespace Access Log Page: Not Supported 00:29:49.646 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:49.646 Command Effects Log Page: Supported 00:29:49.646 Get Log Page Extended Data: Supported 00:29:49.646 Telemetry Log Pages: Not Supported 00:29:49.646 Persistent Event Log Pages: Not Supported 00:29:49.646 Supported Log Pages Log Page: May Support 00:29:49.646 Commands Supported & Effects Log Page: Not Supported 00:29:49.646 Feature Identifiers & Effects Log Page:May Support 00:29:49.646 NVMe-MI Commands & Effects Log Page: May Support 00:29:49.646 Data Area 4 for Telemetry Log: Not Supported 00:29:49.646 Error Log Page Entries Supported: 128 00:29:49.646 Keep 
Alive: Supported 00:29:49.646 Keep Alive Granularity: 10000 ms 00:29:49.646 00:29:49.646 NVM Command Set Attributes 00:29:49.646 ========================== 00:29:49.646 Submission Queue Entry Size 00:29:49.646 Max: 64 00:29:49.646 Min: 64 00:29:49.646 Completion Queue Entry Size 00:29:49.646 Max: 16 00:29:49.646 Min: 16 00:29:49.646 Number of Namespaces: 32 00:29:49.646 Compare Command: Supported 00:29:49.646 Write Uncorrectable Command: Not Supported 00:29:49.646 Dataset Management Command: Supported 00:29:49.646 Write Zeroes Command: Supported 00:29:49.646 Set Features Save Field: Not Supported 00:29:49.646 Reservations: Supported 00:29:49.646 Timestamp: Not Supported 00:29:49.646 Copy: Supported 00:29:49.646 Volatile Write Cache: Present 00:29:49.646 Atomic Write Unit (Normal): 1 00:29:49.646 Atomic Write Unit (PFail): 1 00:29:49.646 Atomic Compare & Write Unit: 1 00:29:49.646 Fused Compare & Write: Supported 00:29:49.646 Scatter-Gather List 00:29:49.646 SGL Command Set: Supported 00:29:49.646 SGL Keyed: Supported 00:29:49.646 SGL Bit Bucket Descriptor: Not Supported 00:29:49.646 SGL Metadata Pointer: Not Supported 00:29:49.646 Oversized SGL: Not Supported 00:29:49.646 SGL Metadata Address: Not Supported 00:29:49.646 SGL Offset: Supported 00:29:49.646 Transport SGL Data Block: Not Supported 00:29:49.646 Replay Protected Memory Block: Not Supported 00:29:49.646 00:29:49.646 Firmware Slot Information 00:29:49.646 ========================= 00:29:49.646 Active slot: 1 00:29:49.646 Slot 1 Firmware Revision: 25.01 00:29:49.646 00:29:49.646 00:29:49.646 Commands Supported and Effects 00:29:49.646 ============================== 00:29:49.646 Admin Commands 00:29:49.646 -------------- 00:29:49.646 Get Log Page (02h): Supported 00:29:49.646 Identify (06h): Supported 00:29:49.646 Abort (08h): Supported 00:29:49.646 Set Features (09h): Supported 00:29:49.646 Get Features (0Ah): Supported 00:29:49.646 Asynchronous Event Request (0Ch): Supported 00:29:49.646 Keep Alive (18h): Supported 00:29:49.646 I/O Commands 00:29:49.646 ------------ 00:29:49.646 Flush (00h): Supported LBA-Change 00:29:49.646 Write (01h): Supported LBA-Change 00:29:49.646 Read (02h): Supported 00:29:49.646 Compare (05h): Supported 00:29:49.646 Write Zeroes (08h): Supported LBA-Change 00:29:49.646 Dataset Management (09h): Supported LBA-Change 00:29:49.646 Copy (19h): Supported LBA-Change 00:29:49.646 00:29:49.646 Error Log 00:29:49.646 ========= 00:29:49.646 00:29:49.646 Arbitration 00:29:49.646 =========== 00:29:49.646 Arbitration Burst: 1 00:29:49.646 00:29:49.646 Power Management 00:29:49.646 ================ 00:29:49.646 Number of Power States: 1 00:29:49.646 Current Power State: Power State #0 00:29:49.646 Power State #0: 00:29:49.646 Max Power: 0.00 W 00:29:49.646 Non-Operational State: Operational 00:29:49.646 Entry Latency: Not Reported 00:29:49.646 Exit Latency: Not Reported 00:29:49.646 Relative Read Throughput: 0 00:29:49.646 Relative Read Latency: 0 00:29:49.646 Relative Write Throughput: 0 00:29:49.646 Relative Write Latency: 0 00:29:49.646 Idle Power: Not Reported 00:29:49.646 Active Power: Not Reported 00:29:49.646 Non-Operational Permissive Mode: Not Supported 00:29:49.646 00:29:49.646 Health Information 00:29:49.646 ================== 00:29:49.646 Critical Warnings: 00:29:49.646 Available Spare Space: OK 00:29:49.646 Temperature: OK 00:29:49.646 Device Reliability: OK 00:29:49.646 Read Only: No 00:29:49.646 Volatile Memory Backup: OK 00:29:49.646 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:49.646 Temperature 
Threshold: 0 Kelvin (-273 Celsius) 00:29:49.646 Available Spare: 0% 00:29:49.646 Available Spare Threshold: 0% 00:29:49.646 Life Percentage [2024-11-17 16:06:34.986356] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x184f00 00:29:49.646 [2024-11-17 16:06:34.986369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.986392] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.986402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.986413] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf618 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986456] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:49.647 [2024-11-17 16:06:34.986479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.986490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.986504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.986513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.986528] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.986579] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.986590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.986604] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.986627] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986650] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.986661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.986670] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:49.647 [2024-11-17 16:06:34.986683] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:49.647 [2024-11-17 16:06:34.986694] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 
0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986710] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.986751] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.986759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.986770] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986782] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.986812] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.986822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.986831] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986845] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.986878] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.986886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.986897] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986908] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.986943] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.986954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.986962] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986978] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.986989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.987007] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 
16:06:34.987015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.987026] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987040] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.987074] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.987086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.987095] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf280 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987108] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.987141] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.987150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.987160] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2a8 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987171] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.987207] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.987223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.987232] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2d0 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987245] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.987281] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.987290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.987300] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf2f8 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987312] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987325] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.987343] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.987355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.987364] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf320 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987377] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.987409] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.987423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.987434] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf348 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987445] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.987476] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.987489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.987498] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf370 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987515] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.987526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.987549] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.647 [2024-11-17 16:06:34.987557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:29:49.647 [2024-11-17 16:06:34.991585] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf398 length 0x10 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.991608] nvme_rdma.c:2261:_nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x184f00 00:29:49.647 [2024-11-17 16:06:34.991624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:29:49.647 [2024-11-17 16:06:34.991645] nvme_rdma.c:2501:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:29:49.648 [2024-11-17 16:06:34.991658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0008 p:0 m:0 dnr:0 00:29:49.648 [2024-11-17 16:06:34.991667] nvme_rdma.c:2394:nvme_rdma_request_ready: *DEBUG*: local addr 
0x2000003cf3c0 length 0x10 lkey 0x184f00 00:29:49.648 [2024-11-17 16:06:34.991681] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:29:49.648 Used: 0% 00:29:49.648 Data Units Read: 0 00:29:49.648 Data Units Written: 0 00:29:49.648 Host Read Commands: 0 00:29:49.648 Host Write Commands: 0 00:29:49.648 Controller Busy Time: 0 minutes 00:29:49.648 Power Cycles: 0 00:29:49.648 Power On Hours: 0 hours 00:29:49.648 Unsafe Shutdowns: 0 00:29:49.648 Unrecoverable Media Errors: 0 00:29:49.648 Lifetime Error Log Entries: 0 00:29:49.648 Warning Temperature Time: 0 minutes 00:29:49.648 Critical Temperature Time: 0 minutes 00:29:49.648 00:29:49.648 Number of Queues 00:29:49.648 ================ 00:29:49.648 Number of I/O Submission Queues: 127 00:29:49.648 Number of I/O Completion Queues: 127 00:29:49.648 00:29:49.648 Active Namespaces 00:29:49.648 ================= 00:29:49.648 Namespace ID:1 00:29:49.648 Error Recovery Timeout: Unlimited 00:29:49.648 Command Set Identifier: NVM (00h) 00:29:49.648 Deallocate: Supported 00:29:49.648 Deallocated/Unwritten Error: Not Supported 00:29:49.648 Deallocated Read Value: Unknown 00:29:49.648 Deallocate in Write Zeroes: Not Supported 00:29:49.648 Deallocated Guard Field: 0xFFFF 00:29:49.648 Flush: Supported 00:29:49.648 Reservation: Supported 00:29:49.648 Namespace Sharing Capabilities: Multiple Controllers 00:29:49.648 Size (in LBAs): 131072 (0GiB) 00:29:49.648 Capacity (in LBAs): 131072 (0GiB) 00:29:49.648 Utilization (in LBAs): 131072 (0GiB) 00:29:49.648 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:49.648 EUI64: ABCDEF0123456789 00:29:49.648 UUID: ee8860ba-6d31-45ea-8745-1531527c2757 00:29:49.648 Thin Provisioning: Not Supported 00:29:49.648 Per-NS Atomic Units: Yes 00:29:49.648 Atomic Boundary Size (Normal): 0 00:29:49.648 Atomic Boundary Size (PFail): 0 00:29:49.648 Atomic Boundary Offset: 0 00:29:49.648 Maximum Single Source Range Length: 65535 00:29:49.648 Maximum Copy Length: 65535 00:29:49.648 Maximum Source Range Count: 1 00:29:49.648 NGUID/EUI64 Never Reused: No 00:29:49.648 Namespace Write Protected: No 00:29:49.648 Number of LBA Formats: 1 00:29:49.648 Current LBA Format: LBA Format #00 00:29:49.648 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:49.648 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 
00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:29:49.648 rmmod nvme_rdma 00:29:49.648 rmmod nvme_fabrics 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3464680 ']' 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3464680 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3464680 ']' 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3464680 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.648 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3464680 00:29:49.907 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:49.907 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:49.907 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3464680' 00:29:49.907 killing process with pid 3464680 00:29:49.907 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3464680 00:29:49.907 16:06:35 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3464680 00:29:51.816 16:06:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:51.816 16:06:37 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:29:51.816 00:29:51.817 real 0m10.804s 00:29:51.817 user 0m14.476s 00:29:51.817 sys 0m5.901s 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:51.817 ************************************ 00:29:51.817 END TEST nvmf_identify 00:29:51.817 ************************************ 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:51.817 ************************************ 00:29:51.817 START TEST nvmf_perf 00:29:51.817 ************************************ 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:29:51.817 * Looking for test storage... 
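The identify test above tears itself down before nvmf_perf starts. A condensed sketch of that teardown, taken from the trace (the full rpc.py path is shortened, and killprocess is the common.sh helper that wraps the final kill of the recorded pid):

    # identify.sh teardown as traced above
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    modprobe -v -r nvme-rdma                                  # unloads nvme_rdma (pulls out nvme_fabrics too)
    modprobe -v -r nvme-fabrics
    killprocess 3464680                                       # stop the nvmf_tgt started for this test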
00:29:51.817 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:51.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.817 --rc genhtml_branch_coverage=1 00:29:51.817 --rc genhtml_function_coverage=1 00:29:51.817 --rc genhtml_legend=1 00:29:51.817 --rc geninfo_all_blocks=1 00:29:51.817 --rc geninfo_unexecuted_blocks=1 00:29:51.817 00:29:51.817 ' 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:51.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.817 --rc genhtml_branch_coverage=1 00:29:51.817 --rc genhtml_function_coverage=1 00:29:51.817 --rc genhtml_legend=1 00:29:51.817 --rc geninfo_all_blocks=1 00:29:51.817 --rc geninfo_unexecuted_blocks=1 00:29:51.817 00:29:51.817 ' 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:51.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.817 --rc genhtml_branch_coverage=1 00:29:51.817 --rc genhtml_function_coverage=1 00:29:51.817 --rc genhtml_legend=1 00:29:51.817 --rc geninfo_all_blocks=1 00:29:51.817 --rc geninfo_unexecuted_blocks=1 00:29:51.817 00:29:51.817 ' 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:51.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.817 --rc genhtml_branch_coverage=1 00:29:51.817 --rc genhtml_function_coverage=1 00:29:51.817 --rc genhtml_legend=1 00:29:51.817 --rc geninfo_all_blocks=1 00:29:51.817 --rc geninfo_unexecuted_blocks=1 00:29:51.817 00:29:51.817 ' 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.817 16:06:37 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.817 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:51.818 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.818 16:06:37 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:51.818 16:06:37 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:58.384 16:06:43 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:58.384 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:58.384 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:58.384 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:58.384 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:58.385 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # rdma_device_init 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:29:58.385 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:58.385 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:58.385 altname enp217s0f0np0 00:29:58.385 altname ens818f0np0 00:29:58.385 inet 192.168.100.8/24 scope global mlx_0_0 00:29:58.385 valid_lft forever preferred_lft forever 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:29:58.385 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:58.385 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:58.385 altname enp217s0f1np1 00:29:58.385 altname ens818f1np1 00:29:58.385 inet 192.168.100.9/24 scope global mlx_0_1 00:29:58.385 valid_lft forever preferred_lft forever 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@478 
-- # '[' '' == iso ']' 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:29:58.385 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 
-- # RDMA_IP_LIST='192.168.100.8 00:29:58.644 192.168.100.9' 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:29:58.644 192.168.100.9' 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # head -n 1 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:29:58.644 192.168.100.9' 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # tail -n +2 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # head -n 1 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3468630 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3468630 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3468630 ']' 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:58.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:58.644 16:06:43 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:58.644 [2024-11-17 16:06:44.075022] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
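The target has just been launched (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 3468630); the trace that follows configures it over JSON-RPC. A condensed sketch of that configuration sequence, with the full rpc.py path shortened and values as used in this run (the Nvme0 bdev comes from gen_nvme.sh piped into load_subsystem_config):

    # perf.sh target setup, as traced below
    gen_nvme.sh | rpc.py load_subsystem_config                 # attaches the local NVMe at 0000:d8:00.0 as Nvme0
    rpc.py bdev_malloc_create 64 512                           # Malloc0 (64 MB, 512 B blocks)
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420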
00:29:58.644 [2024-11-17 16:06:44.075113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:58.903 [2024-11-17 16:06:44.209697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:58.903 [2024-11-17 16:06:44.310156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:58.903 [2024-11-17 16:06:44.310202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:58.903 [2024-11-17 16:06:44.310214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:58.903 [2024-11-17 16:06:44.310226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:58.903 [2024-11-17 16:06:44.310235] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:58.903 [2024-11-17 16:06:44.312573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.903 [2024-11-17 16:06:44.312646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:58.903 [2024-11-17 16:06:44.312674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:58.903 [2024-11-17 16:06:44.312681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:59.470 16:06:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:59.470 16:06:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:59.470 16:06:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:59.470 16:06:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:59.470 16:06:44 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:59.470 16:06:44 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:59.470 16:06:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:59.471 16:06:44 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:02.754 16:06:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:02.754 16:06:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:02.754 16:06:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:30:02.754 16:06:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:03.013 16:06:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:03.013 16:06:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:30:03.013 16:06:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:03.013 16:06:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:30:03.013 16:06:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:30:03.272 [2024-11-17 16:06:48.663447] rdma.c:2773:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:30:03.272 [2024-11-17 16:06:48.688207] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002a1c0/0x7fb9b77bd940) succeed. 00:30:03.272 [2024-11-17 16:06:48.697992] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002a340/0x7fb9b7779940) succeed. 00:30:03.530 16:06:48 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:03.530 16:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:03.530 16:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:03.788 16:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:03.788 16:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:04.046 16:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:04.304 [2024-11-17 16:06:49.645511] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:30:04.304 16:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:30:04.563 16:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:30:04.563 16:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:30:04.563 16:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:04.563 16:06:49 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:30:05.938 Initializing NVMe Controllers 00:30:05.938 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:30:05.938 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:30:05.938 Initialization complete. Launching workers. 
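Condensed from the trace above, the target bring-up that host/perf.sh performs before the baseline result below amounts to the RPC sequence sketched here. This is only a readability recap of commands already visible in the log: "$rpc" is an editorial shorthand for the rpc.py path used throughout the trace, not a variable the script defines, and piping gen_nvme.sh into load_subsystem_config is inferred from the two calls being traced on the same perf.sh line.

# nvmf_tgt was started earlier via nvmfappstart -m 0xF
# (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, pid 3468630 in this run).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Bdevs: attach the local NVMe drive at 0000:d8:00.0 and create a 64 MB malloc bdev (512-byte blocks).
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh | $rpc load_subsystem_config
$rpc bdev_malloc_create 64 512                                        # -> Malloc0

# NVMe-oF target: RDMA transport, one subsystem carrying both bdevs as namespaces,
# plus data and discovery listeners on 192.168.100.8:4420.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420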
00:30:05.938 ======================================================== 00:30:05.938 Latency(us) 00:30:05.938 Device Information : IOPS MiB/s Average min max 00:30:05.938 PCIE (0000:d8:00.0) NSID 1 from core 0: 92822.77 362.59 344.10 31.61 8202.53 00:30:05.938 ======================================================== 00:30:05.938 Total : 92822.77 362.59 344.10 31.61 8202.53 00:30:05.938 00:30:05.938 16:06:51 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:09.224 Initializing NVMe Controllers 00:30:09.225 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:09.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:09.225 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:09.225 Initialization complete. Launching workers. 00:30:09.225 ======================================================== 00:30:09.225 Latency(us) 00:30:09.225 Device Information : IOPS MiB/s Average min max 00:30:09.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6112.99 23.88 161.89 57.64 4124.45 00:30:09.225 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4724.00 18.45 210.41 85.20 4146.80 00:30:09.225 ======================================================== 00:30:09.225 Total : 10836.99 42.33 183.04 57.64 4146.80 00:30:09.225 00:30:09.484 16:06:54 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:13.677 Initializing NVMe Controllers 00:30:13.678 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:13.678 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:13.678 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:13.678 Initialization complete. Launching workers. 00:30:13.678 ======================================================== 00:30:13.678 Latency(us) 00:30:13.678 Device Information : IOPS MiB/s Average min max 00:30:13.678 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16170.27 63.17 1970.41 548.22 5739.39 00:30:13.678 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3997.84 15.62 7971.00 5821.36 10040.73 00:30:13.678 ======================================================== 00:30:13.678 Total : 20168.11 78.78 3159.88 548.22 10040.73 00:30:13.678 00:30:13.678 16:06:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:30:13.678 16:06:58 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:17.944 Initializing NVMe Controllers 00:30:17.944 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.944 Controller IO queue size 128, less than required. 00:30:17.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:17.944 Controller IO queue size 128, less than required. 00:30:17.944 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:17.944 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:17.944 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:17.944 Initialization complete. Launching workers. 00:30:17.944 ======================================================== 00:30:17.944 Latency(us) 00:30:17.944 Device Information : IOPS MiB/s Average min max 00:30:17.944 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3283.07 820.77 39397.72 16132.79 295086.52 00:30:17.944 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3437.00 859.25 38654.60 16602.50 407731.94 00:30:17.944 ======================================================== 00:30:17.944 Total : 6720.07 1680.02 39017.65 16132.79 407731.94 00:30:17.944 00:30:17.944 16:07:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:30:18.511 No valid NVMe controllers or AIO or URING devices found 00:30:18.511 Initializing NVMe Controllers 00:30:18.511 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:18.511 Controller IO queue size 128, less than required. 00:30:18.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:18.511 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:18.511 Controller IO queue size 128, less than required. 00:30:18.511 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:18.511 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:30:18.511 WARNING: Some requested NVMe devices were skipped 00:30:18.511 16:07:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:30:23.780 Initializing NVMe Controllers 00:30:23.780 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.780 Controller IO queue size 128, less than required. 00:30:23.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.780 Controller IO queue size 128, less than required. 00:30:23.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.780 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:23.780 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:23.780 Initialization complete. Launching workers. 
00:30:23.780 00:30:23.780 ==================== 00:30:23.780 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:23.780 RDMA transport: 00:30:23.780 dev name: mlx5_0 00:30:23.780 polls: 319568 00:30:23.780 idle_polls: 317068 00:30:23.780 completions: 36786 00:30:23.780 queued_requests: 1 00:30:23.780 total_send_wrs: 18393 00:30:23.780 send_doorbell_updates: 2294 00:30:23.780 total_recv_wrs: 18520 00:30:23.780 recv_doorbell_updates: 2295 00:30:23.780 --------------------------------- 00:30:23.780 00:30:23.780 ==================== 00:30:23.780 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:23.780 RDMA transport: 00:30:23.780 dev name: mlx5_0 00:30:23.780 polls: 320214 00:30:23.780 idle_polls: 319966 00:30:23.780 completions: 17526 00:30:23.780 queued_requests: 1 00:30:23.780 total_send_wrs: 8763 00:30:23.780 send_doorbell_updates: 236 00:30:23.780 total_recv_wrs: 8890 00:30:23.780 recv_doorbell_updates: 237 00:30:23.780 --------------------------------- 00:30:23.780 ======================================================== 00:30:23.780 Latency(us) 00:30:23.780 Device Information : IOPS MiB/s Average min max 00:30:23.780 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4598.00 1149.50 28417.20 15307.89 237786.01 00:30:23.780 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2190.50 547.62 60103.13 31889.90 410514.87 00:30:23.780 ======================================================== 00:30:23.780 Total : 6788.50 1697.12 38641.55 15307.89 410514.87 00:30:23.780 00:30:23.780 16:07:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:23.780 16:07:08 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:23.780 16:07:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:23.780 16:07:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:30:23.780 16:07:09 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:30.340 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=06f63132-f42d-4bde-986e-a9d861026741 00:30:30.340 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 06f63132-f42d-4bde-986e-a9d861026741 00:30:30.340 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=06f63132-f42d-4bde-986e-a9d861026741 00:30:30.340 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:30.340 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:30.340 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:30.340 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:30.340 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:30.340 { 00:30:30.340 "uuid": "06f63132-f42d-4bde-986e-a9d861026741", 00:30:30.340 "name": "lvs_0", 00:30:30.340 "base_bdev": "Nvme0n1", 00:30:30.340 "total_data_clusters": 476466, 00:30:30.340 "free_clusters": 476466, 00:30:30.340 "block_size": 512, 00:30:30.340 "cluster_size": 4194304 
00:30:30.341 } 00:30:30.341 ]' 00:30:30.341 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="06f63132-f42d-4bde-986e-a9d861026741") .free_clusters' 00:30:30.341 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=476466 00:30:30.341 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="06f63132-f42d-4bde-986e-a9d861026741") .cluster_size' 00:30:30.341 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:30.341 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=1905864 00:30:30.341 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 1905864 00:30:30.341 1905864 00:30:30.341 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:30:30.341 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:30.341 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 06f63132-f42d-4bde-986e-a9d861026741 lbd_0 20480 00:30:30.598 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=d18640f0-fb85-47c8-abf0-9a61603189fc 00:30:30.598 16:07:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore d18640f0-fb85-47c8-abf0-9a61603189fc lvs_n_0 00:30:32.496 16:07:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=a5916397-4607-492e-93ce-f41a5e6ca960 00:30:32.496 16:07:17 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb a5916397-4607-492e-93ce-f41a5e6ca960 00:30:32.496 16:07:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=a5916397-4607-492e-93ce-f41a5e6ca960 00:30:32.496 16:07:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:32.496 16:07:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:32.496 16:07:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:32.496 16:07:17 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:32.753 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:32.753 { 00:30:32.753 "uuid": "06f63132-f42d-4bde-986e-a9d861026741", 00:30:32.753 "name": "lvs_0", 00:30:32.753 "base_bdev": "Nvme0n1", 00:30:32.753 "total_data_clusters": 476466, 00:30:32.753 "free_clusters": 471346, 00:30:32.753 "block_size": 512, 00:30:32.753 "cluster_size": 4194304 00:30:32.753 }, 00:30:32.753 { 00:30:32.753 "uuid": "a5916397-4607-492e-93ce-f41a5e6ca960", 00:30:32.753 "name": "lvs_n_0", 00:30:32.753 "base_bdev": "d18640f0-fb85-47c8-abf0-9a61603189fc", 00:30:32.753 "total_data_clusters": 5114, 00:30:32.753 "free_clusters": 5114, 00:30:32.753 "block_size": 512, 00:30:32.753 "cluster_size": 4194304 00:30:32.753 } 00:30:32.753 ]' 00:30:32.753 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="a5916397-4607-492e-93ce-f41a5e6ca960") .free_clusters' 00:30:32.753 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:32.753 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="a5916397-4607-492e-93ce-f41a5e6ca960") .cluster_size' 00:30:32.753 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:32.753 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:32.753 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:30:32.753 20456 00:30:32.754 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:32.754 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5916397-4607-492e-93ce-f41a5e6ca960 lbd_nest_0 20456 00:30:33.012 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=554f141e-a1fb-4215-81c5-b5b2aa688a06 00:30:33.012 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:33.012 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:33.012 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 554f141e-a1fb-4215-81c5-b5b2aa688a06 00:30:33.270 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:30:33.528 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:33.528 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:33.528 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:33.528 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:33.528 16:07:18 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:45.730 Initializing NVMe Controllers 00:30:45.730 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:45.730 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:45.730 Initialization complete. Launching workers. 
00:30:45.730 ======================================================== 00:30:45.730 Latency(us) 00:30:45.730 Device Information : IOPS MiB/s Average min max 00:30:45.730 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5201.00 2.54 191.72 78.54 7087.26 00:30:45.730 ======================================================== 00:30:45.730 Total : 5201.00 2.54 191.72 78.54 7087.26 00:30:45.730 00:30:45.730 16:07:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:45.730 16:07:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:30:57.929 Initializing NVMe Controllers 00:30:57.929 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:30:57.929 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:57.929 Initialization complete. Launching workers. 00:30:57.929 ======================================================== 00:30:57.929 Latency(us) 00:30:57.929 Device Information : IOPS MiB/s Average min max 00:30:57.929 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2483.56 310.45 402.20 174.44 7104.81 00:30:57.929 ======================================================== 00:30:57.929 Total : 2483.56 310.45 402.20 174.44 7104.81 00:30:57.929 00:30:57.929 16:07:41 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:57.929 16:07:41 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:57.929 16:07:41 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:07.898 Initializing NVMe Controllers 00:31:07.898 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:07.898 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:07.898 Initialization complete. Launching workers. 00:31:07.898 ======================================================== 00:31:07.898 Latency(us) 00:31:07.898 Device Information : IOPS MiB/s Average min max 00:31:07.898 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10401.90 5.08 3075.63 1170.39 8308.12 00:31:07.898 ======================================================== 00:31:07.898 Total : 10401.90 5.08 3075.63 1170.39 8308.12 00:31:07.898 00:31:08.156 16:07:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:08.156 16:07:53 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:20.356 Initializing NVMe Controllers 00:31:20.356 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:20.356 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:20.356 Initialization complete. Launching workers. 
00:31:20.356 ======================================================== 00:31:20.356 Latency(us) 00:31:20.356 Device Information : IOPS MiB/s Average min max 00:31:20.356 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4000.11 500.01 8005.47 3899.27 26130.92 00:31:20.356 ======================================================== 00:31:20.356 Total : 4000.11 500.01 8005.47 3899.27 26130.92 00:31:20.356 00:31:20.356 16:08:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:20.356 16:08:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:20.356 16:08:05 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:32.553 Initializing NVMe Controllers 00:31:32.553 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:32.553 Controller IO queue size 128, less than required. 00:31:32.554 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:32.554 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:32.554 Initialization complete. Launching workers. 00:31:32.554 ======================================================== 00:31:32.554 Latency(us) 00:31:32.554 Device Information : IOPS MiB/s Average min max 00:31:32.554 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16711.04 8.16 7656.48 2304.52 15850.59 00:31:32.554 ======================================================== 00:31:32.554 Total : 16711.04 8.16 7656.48 2304.52 15850.59 00:31:32.554 00:31:32.554 16:08:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:32.554 16:08:16 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:31:42.596 Initializing NVMe Controllers 00:31:42.596 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:31:42.596 Controller IO queue size 128, less than required. 00:31:42.596 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:42.596 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:42.596 Initialization complete. Launching workers. 
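The six runs above, whose last data point (queue depth 128, 128 KiB I/O) is reported just below, come from one nested sweep in host/perf.sh; as traced at perf.sh@95-99 it is equivalent to the following sketch, with the spdk_nvme_perf path shortened here for readability.

qd_depth=(1 32 128)
io_size=(512 131072)
for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    # randrw 50/50 for 10 s per point, against the nested lvol namespace
    # exported through nqn.2016-06.io.spdk:cnode1 at 192.168.100.8:4420
    ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
  done
done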
00:31:42.596 ======================================================== 00:31:42.596 Latency(us) 00:31:42.596 Device Information : IOPS MiB/s Average min max 00:31:42.596 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9867.80 1233.47 12979.66 3789.69 93460.60 00:31:42.596 ======================================================== 00:31:42.596 Total : 9867.80 1233.47 12979.66 3789.69 93460.60 00:31:42.596 00:31:42.854 16:08:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:43.113 16:08:28 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 554f141e-a1fb-4215-81c5-b5b2aa688a06 00:31:43.679 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:43.936 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d18640f0-fb85-47c8-abf0-9a61603189fc 00:31:44.195 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:44.453 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:44.453 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:44.453 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:31:44.454 rmmod nvme_rdma 00:31:44.454 rmmod nvme_fabrics 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3468630 ']' 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3468630 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3468630 ']' 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3468630 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3468630 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:44.454 16:08:29 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3468630' 00:31:44.454 killing process with pid 3468630 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3468630 00:31:44.454 16:08:29 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3468630 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:31:48.640 00:31:48.640 real 1m56.346s 00:31:48.640 user 7m19.121s 00:31:48.640 sys 0m8.192s 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:48.640 ************************************ 00:31:48.640 END TEST nvmf_perf 00:31:48.640 ************************************ 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.640 ************************************ 00:31:48.640 START TEST nvmf_fio_host 00:31:48.640 ************************************ 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:31:48.640 * Looking for test storage... 
00:31:48.640 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:48.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.640 --rc genhtml_branch_coverage=1 00:31:48.640 --rc genhtml_function_coverage=1 00:31:48.640 --rc genhtml_legend=1 00:31:48.640 --rc geninfo_all_blocks=1 00:31:48.640 --rc geninfo_unexecuted_blocks=1 00:31:48.640 00:31:48.640 ' 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:48.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.640 --rc genhtml_branch_coverage=1 00:31:48.640 --rc genhtml_function_coverage=1 00:31:48.640 --rc genhtml_legend=1 00:31:48.640 --rc geninfo_all_blocks=1 00:31:48.640 --rc geninfo_unexecuted_blocks=1 00:31:48.640 00:31:48.640 ' 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:48.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.640 --rc genhtml_branch_coverage=1 00:31:48.640 --rc genhtml_function_coverage=1 00:31:48.640 --rc genhtml_legend=1 00:31:48.640 --rc geninfo_all_blocks=1 00:31:48.640 --rc geninfo_unexecuted_blocks=1 00:31:48.640 00:31:48.640 ' 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:48.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.640 --rc genhtml_branch_coverage=1 00:31:48.640 --rc genhtml_function_coverage=1 00:31:48.640 --rc genhtml_legend=1 00:31:48.640 --rc geninfo_all_blocks=1 00:31:48.640 --rc geninfo_unexecuted_blocks=1 00:31:48.640 00:31:48.640 ' 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.640 16:08:33 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.640 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:48.641 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:31:48.641 
16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:48.641 16:08:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:31:55.206 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:31:55.206 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:31:55.206 Found net devices under 0000:d9:00.0: mlx_0_0 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:31:55.206 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:31:55.207 Found net devices under 0000:d9:00.1: mlx_0_1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # rdma_device_init 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:31:55.207 
16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:31:55.207 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:55.207 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:31:55.207 altname enp217s0f0np0 00:31:55.207 altname ens818f0np0 00:31:55.207 inet 192.168.100.8/24 scope global mlx_0_0 00:31:55.207 valid_lft forever preferred_lft forever 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
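The address lookup for mlx_0_0 above reduces to a single pipeline; a sketch reconstructed from the traced commands (the helper name mirrors the traced function, but this is a condensed paraphrase, not the file's exact body):

  get_ip_address() {
    # Print the interface's IPv4 address without its /prefix length,
    # i.e. the ip | awk | cut pipeline shown in the trace.
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # prints 192.168.100.8 on this host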
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:31:55.207 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:31:55.207 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:31:55.207 altname enp217s0f1np1 00:31:55.207 altname ens818f1np1 00:31:55.207 inet 192.168.100.9/24 scope global mlx_0_1 00:31:55.207 valid_lft forever preferred_lft forever 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:31:55.207 16:08:40 
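Both address passes reuse get_rdma_if_list, which keeps only the net devices that rxe_cfg also reports, hence the nested loops and "continue 2" in the trace. A paraphrased sketch, assuming net_devs was already filled by the PCI scan earlier in the log and that rxe_cfg is the wrapper around scripts/rxe_cfg_small.sh seen above:

  get_rdma_if_list() {
    local net_dev rxe_net_dev rxe_net_devs
    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
    for net_dev in "${net_devs[@]}"; do
      for rxe_net_dev in "${rxe_net_devs[@]}"; do
        if [[ $net_dev == "$rxe_net_dev" ]]; then
          echo "$net_dev"
          continue 2
        fi
      done
    done
  }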
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:31:55.207 192.168.100.9' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:31:55.207 192.168.100.9' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # head -n 1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:31:55.207 192.168.100.9' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # tail -n +2 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # head -n 1 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:55.207 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:55.208 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:55.208 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.208 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3489958 00:31:55.208 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:55.208 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
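With both ports carrying addresses, the harness flattens them into RDMA_IP_LIST and peels off the first two as target IPs with the head/tail pipeline traced above. Condensed, with this run's values as comments:

  RDMA_IP_LIST=$(get_available_rdma_ips)                                  # 192.168.100.8 and 192.168.100.9 here
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
  modprobe nvme-rdma                                                      # kernel NVMe/RDMA initiator module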
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:55.208 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3489958 00:31:55.208 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3489958 ']' 00:31:55.208 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.208 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.208 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.208 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.208 16:08:40 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.208 [2024-11-17 16:08:40.650433] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:31:55.208 [2024-11-17 16:08:40.650539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.466 [2024-11-17 16:08:40.782413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:55.466 [2024-11-17 16:08:40.880106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.466 [2024-11-17 16:08:40.880158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:55.466 [2024-11-17 16:08:40.880171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.466 [2024-11-17 16:08:40.880184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.466 [2024-11-17 16:08:40.880195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.466 [2024-11-17 16:08:40.882767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.466 [2024-11-17 16:08:40.882843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.466 [2024-11-17 16:08:40.882902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.466 [2024-11-17 16:08:40.882911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:56.031 16:08:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.031 16:08:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:56.031 16:08:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:31:56.289 [2024-11-17 16:08:41.638824] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000029140/0x7f9e19f7d940) succeed. 00:31:56.289 [2024-11-17 16:08:41.648297] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6120000292c0/0x7f9e19f39940) succeed. 
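At this point the target application is up (pid 3489958) and the RDMA transport exists, which is what produces the mlx5_0/mlx5_1 IB device notices above. The equivalent manual bring-up, condensed from the traced commands (paths are the workspace-specific ones from this run):

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: the four reactor cores logged above
  nvmfpid=$!
  # waitforlisten polls /var/tmp/spdk.sock until the app starts answering RPCs
  "$spdk/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192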
00:31:56.547 16:08:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:56.548 16:08:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.548 16:08:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.548 16:08:41 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:56.806 Malloc1 00:31:56.806 16:08:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:57.064 16:08:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:57.322 16:08:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:31:57.322 [2024-11-17 16:08:42.815764] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:31:57.322 16:08:42 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:57.580 16:08:43 
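The RPC sequence above builds the whole data path for the first fio pass: a 64 MiB malloc bdev exported as a namespace of cnode1, a data listener on the first RDMA address, and a discovery listener. Condensed from the traced calls, together with the fio invocation the fio_nvme wrapper ends up running (the ASan library in LD_PRELOAD is the one resolved from the ldd output above):

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  rpc="$spdk/scripts/rpc.py"
  $rpc bdev_malloc_create 64 512 -b Malloc1                  # 64 MiB, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
  # fio reaches the listener through the SPDK NVMe fio plugin; the fabrics address
  # travels in --filename rather than in the job file.
  LD_PRELOAD="/usr/lib64/libasan.so.8 $spdk/build/fio/spdk_nvme" \
    /usr/src/fio/fio "$spdk/app/fio/nvme/example_config.fio" \
    '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096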
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:57.580 16:08:43 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:31:58.146 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:58.146 fio-3.35 00:31:58.146 Starting 1 thread 00:32:00.677 00:32:00.677 test: (groupid=0, jobs=1): err= 0: pid=3490584: Sun Nov 17 16:08:45 2024 00:32:00.677 read: IOPS=15.4k, BW=60.2MiB/s (63.2MB/s)(121MiB/2004msec) 00:32:00.677 slat (nsec): min=1469, max=42066, avg=1641.18, stdev=569.64 00:32:00.677 clat (usec): min=3126, max=7620, avg=4130.21, stdev=127.29 00:32:00.677 lat (usec): min=3146, max=7621, avg=4131.85, stdev=127.26 00:32:00.677 clat percentiles (usec): 00:32:00.677 | 1.00th=[ 3720], 5.00th=[ 4080], 10.00th=[ 4080], 20.00th=[ 4113], 00:32:00.677 | 30.00th=[ 4113], 40.00th=[ 4113], 50.00th=[ 4113], 60.00th=[ 4146], 00:32:00.677 | 70.00th=[ 4146], 80.00th=[ 4146], 90.00th=[ 4146], 95.00th=[ 4178], 00:32:00.677 | 99.00th=[ 4555], 99.50th=[ 4555], 99.90th=[ 5997], 99.95th=[ 6980], 00:32:00.677 | 99.99th=[ 7570] 00:32:00.677 bw ( KiB/s): min=60680, max=62424, per=99.98%, avg=61674.00, stdev=884.89, samples=4 00:32:00.677 iops : min=15170, max=15606, avg=15418.50, stdev=221.22, samples=4 00:32:00.677 write: IOPS=15.4k, BW=60.3MiB/s (63.2MB/s)(121MiB/2004msec); 0 zone resets 00:32:00.677 slat (nsec): min=1520, max=41469, avg=1739.56, stdev=576.54 00:32:00.677 clat (usec): min=3141, max=7613, avg=4128.99, stdev=125.94 00:32:00.677 lat (usec): min=3162, max=7614, avg=4130.73, stdev=125.91 00:32:00.677 clat percentiles (usec): 00:32:00.677 | 1.00th=[ 3720], 5.00th=[ 4080], 10.00th=[ 4080], 20.00th=[ 4113], 00:32:00.677 | 30.00th=[ 4113], 40.00th=[ 4113], 50.00th=[ 4113], 60.00th=[ 4146], 00:32:00.677 | 70.00th=[ 4146], 80.00th=[ 4146], 90.00th=[ 4146], 95.00th=[ 4178], 00:32:00.677 | 99.00th=[ 4555], 99.50th=[ 4555], 99.90th=[ 5604], 99.95th=[ 6587], 00:32:00.677 | 99.99th=[ 7570] 00:32:00.677 bw ( KiB/s): min=61008, max=62576, per=100.00%, avg=61722.00, stdev=654.16, samples=4 00:32:00.677 iops : min=15252, max=15644, avg=15430.50, stdev=163.54, samples=4 00:32:00.677 lat (msec) : 4=2.17%, 10=97.83% 00:32:00.677 cpu : usr=99.40%, sys=0.25%, ctx=15, majf=0, minf=1291 00:32:00.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:00.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:00.677 issued rwts: total=30906,30919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:00.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:00.677 00:32:00.677 Run status group 0 (all jobs): 00:32:00.677 READ: bw=60.2MiB/s (63.2MB/s), 60.2MiB/s-60.2MiB/s (63.2MB/s-63.2MB/s), io=121MiB (127MB), run=2004-2004msec 00:32:00.677 WRITE: 
bw=60.3MiB/s (63.2MB/s), 60.3MiB/s-60.3MiB/s (63.2MB/s-63.2MB/s), io=121MiB (127MB), run=2004-2004msec 00:32:00.677 ----------------------------------------------------- 00:32:00.677 Suppressions used: 00:32:00.677 count bytes template 00:32:00.677 1 63 /usr/src/fio/parse.c 00:32:00.677 1 8 libtcmalloc_minimal.so 00:32:00.677 ----------------------------------------------------- 00:32:00.677 00:32:00.677 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:32:00.677 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:32:00.677 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:00.677 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:00.677 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:00.677 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:00.677 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:00.677 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:00.677 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.677 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:00.677 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:00.677 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:00.969 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:00.969 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:00.969 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:00.969 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:00.969 16:08:46 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:32:01.238 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:01.238 fio-3.35 00:32:01.238 Starting 1 thread 00:32:03.763 00:32:03.763 test: (groupid=0, jobs=1): err= 0: pid=3491194: Sun Nov 17 16:08:49 2024 00:32:03.763 read: IOPS=12.2k, BW=191MiB/s (200MB/s)(377MiB/1976msec) 00:32:03.763 slat (nsec): min=2681, max=44744, avg=3000.95, stdev=1033.68 00:32:03.763 clat (usec): min=515, max=9594, avg=1871.23, 
stdev=1517.92 00:32:03.764 lat (usec): min=518, max=9597, avg=1874.23, stdev=1518.26 00:32:03.764 clat percentiles (usec): 00:32:03.764 | 1.00th=[ 807], 5.00th=[ 922], 10.00th=[ 988], 20.00th=[ 1074], 00:32:03.764 | 30.00th=[ 1156], 40.00th=[ 1254], 50.00th=[ 1352], 60.00th=[ 1483], 00:32:03.764 | 70.00th=[ 1614], 80.00th=[ 1811], 90.00th=[ 4359], 95.00th=[ 5866], 00:32:03.764 | 99.00th=[ 7504], 99.50th=[ 8160], 99.90th=[ 8848], 99.95th=[ 8979], 00:32:03.764 | 99.99th=[ 9372] 00:32:03.764 bw ( KiB/s): min=93504, max=97376, per=48.86%, avg=95496.00, stdev=2051.73, samples=4 00:32:03.764 iops : min= 5844, max= 6086, avg=5968.50, stdev=128.23, samples=4 00:32:03.764 write: IOPS=6870, BW=107MiB/s (113MB/s)(194MiB/1809msec); 0 zone resets 00:32:03.764 slat (nsec): min=28940, max=90475, avg=31271.25, stdev=3679.65 00:32:03.764 clat (usec): min=5203, max=20951, avg=15101.99, stdev=2057.11 00:32:03.764 lat (usec): min=5232, max=20982, avg=15133.26, stdev=2057.07 00:32:03.764 clat percentiles (usec): 00:32:03.764 | 1.00th=[ 8848], 5.00th=[11731], 10.00th=[12518], 20.00th=[13566], 00:32:03.764 | 30.00th=[14222], 40.00th=[14615], 50.00th=[15139], 60.00th=[15664], 00:32:03.764 | 70.00th=[16188], 80.00th=[16909], 90.00th=[17695], 95.00th=[18220], 00:32:03.764 | 99.00th=[19792], 99.50th=[20055], 99.90th=[20579], 99.95th=[20579], 00:32:03.764 | 99.99th=[20841] 00:32:03.764 bw ( KiB/s): min=93696, max=101312, per=89.63%, avg=98536.00, stdev=3493.85, samples=4 00:32:03.764 iops : min= 5856, max= 6332, avg=6158.50, stdev=218.37, samples=4 00:32:03.764 lat (usec) : 750=0.21%, 1000=7.11% 00:32:03.764 lat (msec) : 2=48.87%, 4=2.68%, 10=7.69%, 20=33.19%, 50=0.25% 00:32:03.764 cpu : usr=95.86%, sys=2.49%, ctx=186, majf=0, minf=10430 00:32:03.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:32:03.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:03.764 issued rwts: total=24139,12429,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:03.764 00:32:03.764 Run status group 0 (all jobs): 00:32:03.764 READ: bw=191MiB/s (200MB/s), 191MiB/s-191MiB/s (200MB/s-200MB/s), io=377MiB (395MB), run=1976-1976msec 00:32:03.764 WRITE: bw=107MiB/s (113MB/s), 107MiB/s-107MiB/s (113MB/s-113MB/s), io=194MiB (204MB), run=1809-1809msec 00:32:03.764 ----------------------------------------------------- 00:32:03.764 Suppressions used: 00:32:03.764 count bytes template 00:32:03.764 1 63 /usr/src/fio/parse.c 00:32:03.764 238 22848 /usr/src/fio/iolog.c 00:32:03.764 1 8 libtcmalloc_minimal.so 00:32:03.764 ----------------------------------------------------- 00:32:03.764 00:32:03.764 16:08:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:04.020 16:08:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:04.020 16:08:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:04.020 16:08:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:04.020 16:08:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:04.020 16:08:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:04.020 16:08:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:04.020 16:08:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:04.020 16:08:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:04.277 16:08:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:04.277 16:08:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:d8:00.0 00:32:04.277 16:08:49 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:32:07.554 Nvme0n1 00:32:07.554 16:08:52 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:12.811 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=78453d32-9593-48d4-a05f-69e7adb0f856 00:32:12.811 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 78453d32-9593-48d4-a05f-69e7adb0f856 00:32:12.811 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=78453d32-9593-48d4-a05f-69e7adb0f856 00:32:12.811 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:12.811 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:12.811 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:12.811 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:13.068 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:13.068 { 00:32:13.068 "uuid": "78453d32-9593-48d4-a05f-69e7adb0f856", 00:32:13.068 "name": "lvs_0", 00:32:13.068 "base_bdev": "Nvme0n1", 00:32:13.068 "total_data_clusters": 1862, 00:32:13.068 "free_clusters": 1862, 00:32:13.068 "block_size": 512, 00:32:13.068 "cluster_size": 1073741824 00:32:13.068 } 00:32:13.068 ]' 00:32:13.068 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="78453d32-9593-48d4-a05f-69e7adb0f856") .free_clusters' 00:32:13.068 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1862 00:32:13.068 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="78453d32-9593-48d4-a05f-69e7adb0f856") .cluster_size' 00:32:13.068 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:13.068 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1906688 00:32:13.068 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1906688 00:32:13.068 1906688 00:32:13.068 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:32:13.633 c6c52689-2ac1-4b99-9c68-ddd8191599d1 00:32:13.633 16:08:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
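The second phase swaps the malloc namespace for a logical volume carved out of the local NVMe drive. As traced: enumerate local controllers with gen_nvme.sh | jq, attach 0000:d8:00.0 as Nvme0, create an lvstore with 1 GiB clusters, and size the volume from the store's free clusters (1862 clusters x 1024 MiB each, giving the 1906688 above). Condensed, using the BDF and numbers from this run:

  spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  rpc="$spdk/scripts/rpc.py"
  bdfs=($("$spdk/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))   # -> 0000:d8:00.0 here
  $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "${bdfs[0]}" -i 192.168.100.8
  $rpc bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0                 # 1 GiB clusters
  free_mb=$(( 1862 * 1073741824 / 1024 / 1024 ))                            # free_clusters x cluster_size -> 1906688
  $rpc bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"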
nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:13.889 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:13.889 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:14.146 16:08:59 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:14.711 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:14.711 fio-3.35 00:32:14.711 Starting 1 thread 00:32:17.238 00:32:17.238 test: (groupid=0, jobs=1): err= 0: pid=3493599: Sun Nov 17 16:09:02 2024 00:32:17.238 read: IOPS=8858, BW=34.6MiB/s (36.3MB/s)(69.4MiB/2005msec) 00:32:17.238 slat (nsec): min=1481, 
max=36005, avg=1712.17, stdev=534.30 00:32:17.238 clat (usec): min=201, max=333277, avg=7162.84, stdev=19690.29 00:32:17.238 lat (usec): min=203, max=333281, avg=7164.55, stdev=19690.35 00:32:17.238 clat percentiles (msec): 00:32:17.238 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:32:17.238 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:32:17.238 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 7], 95.00th=[ 7], 00:32:17.238 | 99.00th=[ 7], 99.50th=[ 9], 99.90th=[ 334], 99.95th=[ 334], 00:32:17.238 | 99.99th=[ 334] 00:32:17.238 bw ( KiB/s): min=13333, max=43016, per=99.90%, avg=35397.25, stdev=14712.12, samples=4 00:32:17.238 iops : min= 3333, max=10754, avg=8849.25, stdev=3678.16, samples=4 00:32:17.238 write: IOPS=8874, BW=34.7MiB/s (36.4MB/s)(69.5MiB/2005msec); 0 zone resets 00:32:17.238 slat (nsec): min=1528, max=17703, avg=1821.17, stdev=426.01 00:32:17.238 clat (usec): min=170, max=333721, avg=7132.01, stdev=19146.37 00:32:17.238 lat (usec): min=172, max=333727, avg=7133.83, stdev=19146.46 00:32:17.238 clat percentiles (msec): 00:32:17.238 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:32:17.238 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:32:17.238 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 7], 95.00th=[ 7], 00:32:17.238 | 99.00th=[ 7], 99.50th=[ 9], 99.90th=[ 334], 99.95th=[ 334], 00:32:17.238 | 99.99th=[ 334] 00:32:17.238 bw ( KiB/s): min=13804, max=42816, per=99.85%, avg=35445.00, stdev=14427.79, samples=4 00:32:17.238 iops : min= 3451, max=10704, avg=8861.25, stdev=3606.95, samples=4 00:32:17.238 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:17.238 lat (msec) : 2=0.03%, 4=0.16%, 10=99.37%, 20=0.04%, 500=0.36% 00:32:17.238 cpu : usr=99.25%, sys=0.30%, ctx=16, majf=0, minf=1667 00:32:17.238 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:17.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.238 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:17.238 issued rwts: total=17761,17794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.238 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:17.238 00:32:17.238 Run status group 0 (all jobs): 00:32:17.238 READ: bw=34.6MiB/s (36.3MB/s), 34.6MiB/s-34.6MiB/s (36.3MB/s-36.3MB/s), io=69.4MiB (72.7MB), run=2005-2005msec 00:32:17.238 WRITE: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=69.5MiB (72.9MB), run=2005-2005msec 00:32:17.238 ----------------------------------------------------- 00:32:17.238 Suppressions used: 00:32:17.238 count bytes template 00:32:17.238 1 64 /usr/src/fio/parse.c 00:32:17.238 1 8 libtcmalloc_minimal.so 00:32:17.238 ----------------------------------------------------- 00:32:17.238 00:32:17.238 16:09:02 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:17.497 16:09:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:18.869 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=9223b174-2f89-4937-80e5-ea18877a2f5b 00:32:18.870 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 9223b174-2f89-4937-80e5-ea18877a2f5b 00:32:18.870 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local 
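Next the test nests a second lvstore inside the volume it just exported: lvs_0/lbd_0 becomes the base bdev of lvs_n_0 (created with --clear-method none, as traced), and the nested volume is sized the same way from that store's 4 MiB clusters; the free-cluster count appears in the lvstore listing that follows. Condensed sketch with this run's numbers:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0
  # 476206 free clusters x 4 MiB each -> 1904824 for the nested volume
  $rpc bdev_lvol_create -l lvs_n_0 lbd_nest_0 $(( 476206 * 4194304 / 1024 / 1024 ))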
lvs_uuid=9223b174-2f89-4937-80e5-ea18877a2f5b 00:32:18.870 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:18.870 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:18.870 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:18.870 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:19.127 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:19.128 { 00:32:19.128 "uuid": "78453d32-9593-48d4-a05f-69e7adb0f856", 00:32:19.128 "name": "lvs_0", 00:32:19.128 "base_bdev": "Nvme0n1", 00:32:19.128 "total_data_clusters": 1862, 00:32:19.128 "free_clusters": 0, 00:32:19.128 "block_size": 512, 00:32:19.128 "cluster_size": 1073741824 00:32:19.128 }, 00:32:19.128 { 00:32:19.128 "uuid": "9223b174-2f89-4937-80e5-ea18877a2f5b", 00:32:19.128 "name": "lvs_n_0", 00:32:19.128 "base_bdev": "c6c52689-2ac1-4b99-9c68-ddd8191599d1", 00:32:19.128 "total_data_clusters": 476206, 00:32:19.128 "free_clusters": 476206, 00:32:19.128 "block_size": 512, 00:32:19.128 "cluster_size": 4194304 00:32:19.128 } 00:32:19.128 ]' 00:32:19.128 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="9223b174-2f89-4937-80e5-ea18877a2f5b") .free_clusters' 00:32:19.128 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=476206 00:32:19.128 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="9223b174-2f89-4937-80e5-ea18877a2f5b") .cluster_size' 00:32:19.128 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:19.128 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1904824 00:32:19.128 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1904824 00:32:19.128 1904824 00:32:19.128 16:09:04 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:32:21.654 fa8f7cd6-7859-45fc-88de-411be41049a6 00:32:21.654 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:21.912 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:21.912 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 
traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:22.169 16:09:07 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:32:22.765 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:22.765 fio-3.35 00:32:22.765 Starting 1 thread 00:32:25.318 00:32:25.318 test: (groupid=0, jobs=1): err= 0: pid=3495576: Sun Nov 17 16:09:10 2024 00:32:25.318 read: IOPS=8968, BW=35.0MiB/s (36.7MB/s)(70.3MiB/2006msec) 00:32:25.318 slat (nsec): min=1485, max=36172, avg=1655.86, stdev=437.42 00:32:25.318 clat (usec): min=3796, max=12237, avg=7040.27, stdev=265.90 00:32:25.318 lat (usec): min=3802, max=12238, avg=7041.92, stdev=265.86 00:32:25.318 clat percentiles (usec): 00:32:25.318 | 1.00th=[ 6259], 5.00th=[ 6915], 10.00th=[ 6915], 20.00th=[ 6980], 00:32:25.318 | 30.00th=[ 6980], 40.00th=[ 6980], 50.00th=[ 7046], 60.00th=[ 7046], 00:32:25.318 | 70.00th=[ 7046], 80.00th=[ 7111], 90.00th=[ 7177], 95.00th=[ 7308], 00:32:25.318 | 99.00th=[ 7832], 99.50th=[ 7898], 99.90th=[ 9634], 99.95th=[11338], 00:32:25.318 | 99.99th=[12256] 00:32:25.318 bw ( KiB/s): min=34312, max=36816, per=99.95%, avg=35858.00, stdev=1116.54, samples=4 00:32:25.318 iops : min= 8578, max= 9204, avg=8964.50, stdev=279.13, samples=4 00:32:25.318 write: IOPS=8985, BW=35.1MiB/s (36.8MB/s)(70.4MiB/2006msec); 0 zone resets 00:32:25.318 slat (nsec): min=1534, max=18685, avg=1743.74, stdev=380.66 00:32:25.318 clat (usec): min=3800, max=12305, avg=7062.95, stdev=291.16 00:32:25.318 lat (usec): 
min=3804, max=12307, avg=7064.70, stdev=291.13 00:32:25.318 clat percentiles (usec): 00:32:25.318 | 1.00th=[ 6259], 5.00th=[ 6915], 10.00th=[ 6980], 20.00th=[ 6980], 00:32:25.318 | 30.00th=[ 7046], 40.00th=[ 7046], 50.00th=[ 7046], 60.00th=[ 7046], 00:32:25.318 | 70.00th=[ 7111], 80.00th=[ 7111], 90.00th=[ 7177], 95.00th=[ 7308], 00:32:25.318 | 99.00th=[ 7832], 99.50th=[ 7963], 99.90th=[11338], 99.95th=[12256], 00:32:25.318 | 99.99th=[12256] 00:32:25.319 bw ( KiB/s): min=35064, max=36456, per=99.94%, avg=35918.00, stdev=598.86, samples=4 00:32:25.319 iops : min= 8766, max= 9114, avg=8979.50, stdev=149.71, samples=4 00:32:25.319 lat (msec) : 4=0.05%, 10=99.82%, 20=0.13% 00:32:25.319 cpu : usr=99.30%, sys=0.30%, ctx=16, majf=0, minf=1794 00:32:25.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:25.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:25.319 issued rwts: total=17991,18024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:25.319 00:32:25.319 Run status group 0 (all jobs): 00:32:25.319 READ: bw=35.0MiB/s (36.7MB/s), 35.0MiB/s-35.0MiB/s (36.7MB/s-36.7MB/s), io=70.3MiB (73.7MB), run=2006-2006msec 00:32:25.319 WRITE: bw=35.1MiB/s (36.8MB/s), 35.1MiB/s-35.1MiB/s (36.8MB/s-36.8MB/s), io=70.4MiB (73.8MB), run=2006-2006msec 00:32:25.319 ----------------------------------------------------- 00:32:25.319 Suppressions used: 00:32:25.319 count bytes template 00:32:25.319 1 64 /usr/src/fio/parse.c 00:32:25.319 1 8 libtcmalloc_minimal.so 00:32:25.319 ----------------------------------------------------- 00:32:25.319 00:32:25.319 16:09:10 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:25.597 16:09:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:25.597 16:09:11 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:35.593 16:09:19 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:35.593 16:09:20 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:40.857 16:09:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:40.857 16:09:25 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host 
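Teardown runs in reverse order of construction, as the traced host/fio.sh steps 74 through 80 show: flush, drop the nested volume and its store before the outer ones, then release the NVMe controller and let nvmftestfini unload the host modules. Condensed:

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  sync
  $rpc -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0   # -t 120 raises the RPC timeout, as traced
  $rpc bdev_lvol_delete_lvstore -l lvs_n_0
  $rpc bdev_lvol_delete lvs_0/lbd_0
  $rpc bdev_lvol_delete_lvstore -l lvs_0
  $rpc bdev_nvme_detach_controller Nvme0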
-- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:32:44.135 rmmod nvme_rdma 00:32:44.135 rmmod nvme_fabrics 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3489958 ']' 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3489958 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3489958 ']' 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3489958 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3489958 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3489958' 00:32:44.135 killing process with pid 3489958 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3489958 00:32:44.135 16:09:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3489958 00:32:45.510 16:09:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:45.510 16:09:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:32:45.510 00:32:45.510 real 0m57.419s 00:32:45.510 user 4m4.190s 00:32:45.510 sys 0m11.711s 00:32:45.510 16:09:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:45.510 16:09:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.510 ************************************ 00:32:45.510 END TEST nvmf_fio_host 00:32:45.510 ************************************ 00:32:45.510 16:09:31 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:32:45.510 16:09:31 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:45.510 16:09:31 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:45.510 16:09:31 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.510 ************************************ 00:32:45.510 START TEST nvmf_failover 00:32:45.510 ************************************ 00:32:45.510 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:32:45.770 * 
Looking for test storage... 00:32:45.770 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:45.770 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:45.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.771 --rc genhtml_branch_coverage=1 00:32:45.771 --rc genhtml_function_coverage=1 00:32:45.771 --rc genhtml_legend=1 00:32:45.771 --rc geninfo_all_blocks=1 00:32:45.771 --rc geninfo_unexecuted_blocks=1 00:32:45.771 00:32:45.771 ' 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:45.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.771 --rc genhtml_branch_coverage=1 00:32:45.771 --rc genhtml_function_coverage=1 00:32:45.771 --rc genhtml_legend=1 00:32:45.771 --rc geninfo_all_blocks=1 00:32:45.771 --rc geninfo_unexecuted_blocks=1 00:32:45.771 00:32:45.771 ' 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:45.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.771 --rc genhtml_branch_coverage=1 00:32:45.771 --rc genhtml_function_coverage=1 00:32:45.771 --rc genhtml_legend=1 00:32:45.771 --rc geninfo_all_blocks=1 00:32:45.771 --rc geninfo_unexecuted_blocks=1 00:32:45.771 00:32:45.771 ' 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:45.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:45.771 --rc genhtml_branch_coverage=1 00:32:45.771 --rc genhtml_function_coverage=1 00:32:45.771 --rc genhtml_legend=1 00:32:45.771 --rc geninfo_all_blocks=1 00:32:45.771 --rc geninfo_unexecuted_blocks=1 00:32:45.771 00:32:45.771 ' 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:45.771 16:09:31 
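The failover preamble decides which lcov flags apply by comparing the tool's version against 2 with the cmp_versions helper traced above: both versions are split on '.', '-' and ':' and compared field by field. A paraphrased sketch of just the less-than path used here (the real helper in scripts/common.sh supports several operators):

  lt() {
    local IFS='.-:' i
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for ((i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++)); do
      (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
      (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
  }
  lt 1.15 2 && echo "lcov is older than 2: keep the branch/function coverage flags"   # true in this run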
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:45.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:45.771 16:09:31 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:32:52.357 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:32:52.357 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:32:52.357 Found net devices under 0000:d9:00.0: mlx_0_0 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:32:52.357 Found net devices under 0000:d9:00.1: mlx_0_1 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # rdma_device_init 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@530 -- # allocate_nic_ips 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:52.357 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:32:52.358 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:52.358 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:32:52.358 altname enp217s0f0np0 00:32:52.358 altname ens818f0np0 00:32:52.358 inet 192.168.100.8/24 scope global mlx_0_0 00:32:52.358 
valid_lft forever preferred_lft forever 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:32:52.358 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:32:52.358 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:32:52.358 altname enp217s0f1np1 00:32:52.358 altname ens818f1np1 00:32:52.358 inet 192.168.100.9/24 scope global mlx_0_1 00:32:52.358 valid_lft forever preferred_lft forever 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:32:52.358 16:09:37 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:32:52.358 192.168.100.9' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # head -n 1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:32:52.358 192.168.100.9' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:32:52.358 192.168.100.9' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # tail -n +2 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # head -n 1 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3502453 00:32:52.358 
16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3502453 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3502453 ']' 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:52.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:52.358 16:09:37 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:52.617 [2024-11-17 16:09:37.906366] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:32:52.617 [2024-11-17 16:09:37.906466] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:52.617 [2024-11-17 16:09:38.039459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:52.617 [2024-11-17 16:09:38.140495] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:52.617 [2024-11-17 16:09:38.140542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:52.617 [2024-11-17 16:09:38.140555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:52.617 [2024-11-17 16:09:38.140572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:52.618 [2024-11-17 16:09:38.140582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
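The nvmftestinit trace above reduces to a short RDMA bring-up, restated here as a minimal shell sketch for readability; it adds no steps beyond the calls already visible in the trace. The interface names (mlx_0_0, mlx_0_1) and the 192.168.100.8/192.168.100.9 addresses are simply what this run discovered on the two mlx5 ports (0000:d9:00.0 and 0000:d9:00.1), not fixed values, and paths are shortened relative to the spdk checkout.

  # Load the kernel modules required for NVMe-oF over RDMA (load_ib_rdma_modules + nvme-rdma)
  for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
    modprobe "$m"
  done

  # Read the IPv4 address of each RDMA-capable netdev, the same way get_ip_address does above
  for ifc in mlx_0_0 mlx_0_1; do
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done
  # -> 192.168.100.8 (NVMF_FIRST_TARGET_IP) and 192.168.100.9 (NVMF_SECOND_TARGET_IP)

  # Start the NVMe-oF target on cores 1-3 (-m 0xE) with all tracepoints enabled (-e 0xFFFF)
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The three reactor threads and the two IB devices reported below are the direct result of that launch.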
00:32:52.618 [2024-11-17 16:09:38.142761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:52.618 [2024-11-17 16:09:38.142820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.618 [2024-11-17 16:09:38.142827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:53.184 16:09:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:53.184 16:09:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:53.185 16:09:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:53.185 16:09:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:53.185 16:09:38 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:53.442 16:09:38 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:53.443 16:09:38 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:32:53.443 [2024-11-17 16:09:38.960741] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f5f1a384940) succeed. 00:32:53.443 [2024-11-17 16:09:38.970158] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f5f1a340940) succeed. 00:32:53.701 16:09:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:53.958 Malloc0 00:32:53.958 16:09:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:54.216 16:09:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:54.474 16:09:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:54.474 [2024-11-17 16:09:40.010694] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:32:54.732 16:09:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:32:54.732 [2024-11-17 16:09:40.219276] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:32:54.732 16:09:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:32:54.990 [2024-11-17 16:09:40.407929] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:32:54.990 16:09:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:54.990 16:09:40 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@31 -- # bdevperf_pid=3502983 00:32:54.990 16:09:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:54.990 16:09:40 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3502983 /var/tmp/bdevperf.sock 00:32:54.990 16:09:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3502983 ']' 00:32:54.990 16:09:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:54.990 16:09:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:54.990 16:09:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:54.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:54.990 16:09:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:54.990 16:09:40 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:55.925 16:09:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:55.925 16:09:41 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:55.925 16:09:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:56.183 NVMe0n1 00:32:56.183 16:09:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:56.442 00:32:56.442 16:09:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3503175 00:32:56.442 16:09:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:56.442 16:09:41 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:32:57.377 16:09:42 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:32:57.636 16:09:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:33:00.916 16:09:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:00.916 00:33:00.916 16:09:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:33:01.174 [2024-11-17 16:09:46.504553] rdma.c:4674:nvmf_rdma_log_wc_status: *ERROR*: Error on CQ 0x616000021980, (qp state 5, in_error 1) request 0x35184862948680, type RECV, 
status: (16): aborted error 00:33:01.174 16:09:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:33:04.456 16:09:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:04.456 [2024-11-17 16:09:49.710516] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:04.456 16:09:49 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:33:05.390 16:09:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:33:05.648 16:09:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3503175 00:33:12.210 { 00:33:12.210 "results": [ 00:33:12.210 { 00:33:12.210 "job": "NVMe0n1", 00:33:12.210 "core_mask": "0x1", 00:33:12.210 "workload": "verify", 00:33:12.210 "status": "finished", 00:33:12.210 "verify_range": { 00:33:12.210 "start": 0, 00:33:12.210 "length": 16384 00:33:12.210 }, 00:33:12.210 "queue_depth": 128, 00:33:12.210 "io_size": 4096, 00:33:12.210 "runtime": 15.005642, 00:33:12.210 "iops": 12273.11700492388, 00:33:12.210 "mibps": 47.94186330048391, 00:33:12.210 "io_failed": 4220, 00:33:12.210 "io_timeout": 0, 00:33:12.210 "avg_latency_us": 10173.098869597528, 00:33:12.210 "min_latency_us": 589.824, 00:33:12.210 "max_latency_us": 1026765.6192 00:33:12.210 } 00:33:12.210 ], 00:33:12.210 "core_count": 1 00:33:12.210 } 00:33:12.210 16:09:57 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3502983 00:33:12.210 16:09:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3502983 ']' 00:33:12.210 16:09:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3502983 00:33:12.210 16:09:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:12.210 16:09:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:12.210 16:09:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3502983 00:33:12.210 16:09:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:12.210 16:09:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:12.210 16:09:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3502983' 00:33:12.210 killing process with pid 3502983 00:33:12.210 16:09:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3502983 00:33:12.210 16:09:57 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3502983 00:33:12.783 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:12.783 [2024-11-17 16:09:40.485531] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
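Stripped of the xtrace prefixes and absolute paths, the failover run that produced the try.txt dump comes down to the RPC sequence sketched below. This is only a condensed restatement of the rpc.py calls already traced above (target-side calls go to the default /var/tmp/spdk.sock, host-side calls to bdevperf's /var/tmp/bdevperf.sock); the port order and sleeps are taken from this run.

  # Target side: rdma transport, one 64 MiB / 512 B malloc namespace, three listeners
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s $port
  done

  # Host side: bdevperf with two paths to the subsystem and explicit failover policy
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
    -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
    -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

  # While the 15 s verify workload runs, rotate the listeners to force path failovers
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  sleep 3
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
    -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
  sleep 3
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  sleep 1
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
  wait  # for perform_tests to finish and print the results JSON

Each remove_listener tears down the active queue pair, which is consistent with the earlier rdma.c "aborted error" on CQ 0x616000021980 and with the remainder of try.txt being dominated by ABORTED - SQ DELETION completions; the summary JSON above records those as the 4220 io_failed while the run still completes with roughly 12273 IOPS over the 15-second window.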
00:33:12.783 [2024-11-17 16:09:40.485638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502983 ] 00:33:12.783 [2024-11-17 16:09:40.618342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.783 [2024-11-17 16:09:40.722487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.783 Running I/O for 15 seconds... 00:33:12.783 15488.00 IOPS, 60.50 MiB/s [2024-11-17T15:09:58.327Z] 8384.50 IOPS, 32.75 MiB/s [2024-11-17T15:09:58.327Z] [2024-11-17 16:09:44.043748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.043806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.043837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f9000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.043854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.043870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f7000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.043885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.043900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.043914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.043929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.043946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.043961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.043976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.043991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ed000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 
16:09:44.044036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043db000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:33:12.783 [2024-11-17 16:09:44.044328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d9000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:3224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d7000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d3000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d1000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.783 [2024-11-17 16:09:44.044528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x182500 00:33:12.783 [2024-11-17 16:09:44.044542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.784 [2024-11-17 16:09:44.044556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x182500 00:33:12.784 [2024-11-17 16:09:44.044577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.784 [2024-11-17 16:09:44.044591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3288 
len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x182500 00:33:12.784 [2024-11-17 16:09:44.044605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.784 [2024-11-17 16:09:44.044619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x182500 00:33:12.784 [2024-11-17 16:09:44.044633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.784 [2024-11-17 16:09:44.044647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x182500 00:33:12.784 [2024-11-17 16:09:44.044664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.784 [2024-11-17 16:09:44.044678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x182500 00:33:12.784 [2024-11-17 16:09:44.044692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.784 [2024-11-17 16:09:44.044706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x182500 00:33:12.784 [2024-11-17 16:09:44.044720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.784 [2024-11-17 16:09:44.044735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x182500 00:33:12.784 [2024-11-17 16:09:44.044748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.784 [2024-11-17 16:09:44.044762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x182500 00:33:12.784 [2024-11-17 16:09:44.044777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.784 [2024-11-17 16:09:44.044793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x182500 00:33:12.784 [2024-11-17 16:09:44.044807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.784 [2024-11-17 16:09:44.044821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x182500 00:33:12.784 [2024-11-17 16:09:44.044835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.784 [2024-11-17 16:09:44.044850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x182500 00:33:12.784 [2024-11-17 16:09:44.044865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 2024-11-17 16:09:44.044-16:09:44.047 (console 00:33:12.784-00:33:12.786): per-command prints for the remaining queued READ commands on qid:1, lba 3368 through 4088 in steps of 8, len:8, SGL KEYED DATA BLOCK len:0x1000 key:0x182500, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:33:12.786 [2024-11-17 16:09:44.049519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:12.786 [2024-11-17 16:09:44.049543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:12.786 [2024-11-17 16:09:44.049560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:8 PRP1 0x0 PRP2 0x0
00:33:12.786 [2024-11-17 16:09:44.049581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:12.786 [2024-11-17 16:09:44.049752] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:33:12.786 [2024-11-17 16:09:44.049771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:33:12.786 [2024-11-17 16:09:44.052856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:33:12.786 [2024-11-17 16:09:44.080612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0
00:33:12.786 [2024-11-17 16:09:44.119073] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:33:12.786 9924.33 IOPS, 38.77 MiB/s [2024-11-17T15:09:58.330Z] 11339.25 IOPS, 44.29 MiB/s [2024-11-17T15:09:58.330Z] 10732.40 IOPS, 41.92 MiB/s [2024-11-17T15:09:58.330Z]
[... 2024-11-17 16:09:47.516-16:09:47.519 (console 00:33:12.786-00:33:12.789): per-command prints for queued READ commands on qid:1, lba 46440 through 46928 in steps of 8, SGL KEYED DATA BLOCK len:0x1000 key:0x182400, interleaved with queued WRITE commands, lba 47040 through 47288 in steps of 8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, each followed by ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the dump continues below ...]
00:33:12.789 [2024-11-17 16:09:47.519854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.519867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.519883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:46944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.519895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.519911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.519924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.519942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:46960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.519954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.519970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.519983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.519999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:46976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.520011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:46984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.520041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:46992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.520071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:47000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.520101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:47008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.520129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.520158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.520192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x182400 00:33:12.789 [2024-11-17 16:09:47.520221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:47296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.789 [2024-11-17 16:09:47.520249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.789 [2024-11-17 16:09:47.520278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.789 [2024-11-17 16:09:47.520306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:47320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.789 [2024-11-17 16:09:47.520336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.789 [2024-11-17 16:09:47.520364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.789 [2024-11-17 16:09:47.520380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:47344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:61 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.520767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:47.520780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.522806] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:12.790 [2024-11-17 16:09:47.522825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:12.790 [2024-11-17 16:09:47.522839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47456 len:8 PRP1 0x0 PRP2 0x0 00:33:12.790 [2024-11-17 16:09:47.522853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:47.523051] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:33:12.790 [2024-11-17 16:09:47.523068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:33:12.790 [2024-11-17 16:09:47.526131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:33:12.790 [2024-11-17 16:09:47.554469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] CQ transport error -6 (No such device or address) on qpair id 0 00:33:12.790 [2024-11-17 16:09:47.598311] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:33:12.790 9894.50 IOPS, 38.65 MiB/s [2024-11-17T15:09:58.334Z] 10731.00 IOPS, 41.92 MiB/s [2024-11-17T15:09:58.334Z] 11357.62 IOPS, 44.37 MiB/s [2024-11-17T15:09:58.334Z] 11732.22 IOPS, 45.83 MiB/s [2024-11-17T15:09:58.334Z] [2024-11-17 16:09:51.922938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:51.922991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:81616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:51.923035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:81624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.790 [2024-11-17 16:09:51.923062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439d000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439b000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004399000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004397000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004395000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004393000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:12.790 [2024-11-17 16:09:51.923245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004391000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:81048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438f000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431f000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433b000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004339000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004337000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004335000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004333000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004331000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.790 [2024-11-17 16:09:51.923486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:81112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x182500 00:33:12.790 [2024-11-17 16:09:51.923499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.923526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.923552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:81136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.923589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.923615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.923642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.923668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.923694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431d000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.923720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.923745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:81640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.923772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.923798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:81656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.923824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.923852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.923877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.923903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:81688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.923929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.923954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.923979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.923994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x182500 00:33:12.791 
[2024-11-17 16:09:51.924006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.924031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.924058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.924083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004361000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.924109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433d000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.924137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:81696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.924162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:81704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.924188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.924213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.924239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924252] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.924264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.924289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.924316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.924341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.791 [2024-11-17 16:09:51.924367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.924393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.924420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004341000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.924446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.924476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.791 [2024-11-17 16:09:51.924490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:81280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004345000 len:0x1000 key:0x182500 00:33:12.791 [2024-11-17 16:09:51.924503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004347000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.924528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004349000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.924555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434b000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.924588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:81768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.924614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.924640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.924665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:81792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.924691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:81800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.924716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.924743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:81816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.924768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:81824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.924794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:81312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000042ff000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.924820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433f000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.924846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:81328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432b000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.924872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004329000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.924897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004327000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.924923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:81352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004325000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.924949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004323000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.924975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.924990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004321000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.925002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:59 nsid:1 lba:81832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.925028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:81840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.925054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.925079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.925106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.925131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.925156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.925182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.792 [2024-11-17 16:09:51.925206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430f000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.925232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432d000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.925258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:81392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004301000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.925283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004303000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.925309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004305000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.925334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004307000 len:0x1000 key:0x182500 00:33:12.792 [2024-11-17 16:09:51.925360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.792 [2024-11-17 16:09:51.925374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004309000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.925386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430b000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.925414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 
16:09:51.925788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:12.793 [2024-11-17 16:09:51.925801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430d000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.925827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.925854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.925879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.925905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004377000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.925930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.925958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.925984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.925998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432f000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.926023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 
len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.926051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004311000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.926076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004313000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.926102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004315000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.926128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:81536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004317000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.926154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004319000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.926180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431b000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.926207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.926234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.926260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.926286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.926312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x182500 00:33:12.793 [2024-11-17 16:09:51.926324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.928402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:33:12.793 [2024-11-17 16:09:51.928422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:33:12.793 [2024-11-17 16:09:51.928436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81600 len:8 PRP1 0x0 PRP2 0x0 00:33:12.793 [2024-11-17 16:09:51.928449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:12.793 [2024-11-17 16:09:51.928638] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:33:12.794 [2024-11-17 16:09:51.928655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:33:12.794 [2024-11-17 16:09:51.931722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:33:12.794 [2024-11-17 16:09:51.959598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] CQ transport error -6 (No such device or address) on qpair id 0 00:33:12.794 10559.00 IOPS, 41.25 MiB/s [2024-11-17T15:09:58.338Z] [2024-11-17 16:09:52.000857] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
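A quick consistency check on the throughput samples interleaved above: bdevperf prints IOPS and MiB/s for the same 4096-byte I/O size, so the two figures should satisfy MiB/s = IOPS * 4096 / 2^20. Illustrative one-liner (not part of the test scripts):

    awk 'BEGIN { printf "%.2f\n", 10559 * 4096 / (1024 * 1024) }'    # prints 41.25, matching the "10559.00 IOPS, 41.25 MiB/s" sample

The per-interval samples and the summary table that follows obey the same relation.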
00:33:12.794 11000.00 IOPS, 42.97 MiB/s [2024-11-17T15:09:58.338Z] 11400.50 IOPS, 44.53 MiB/s [2024-11-17T15:09:58.338Z] 11736.15 IOPS, 45.84 MiB/s [2024-11-17T15:09:58.338Z] 12026.57 IOPS, 46.98 MiB/s 00:33:12.794 Latency(us) 00:33:12.794 [2024-11-17T15:09:58.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.794 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:12.794 Verification LBA range: start 0x0 length 0x4000 00:33:12.794 NVMe0n1 : 15.01 12273.12 47.94 281.23 0.00 10173.10 589.82 1026765.62 00:33:12.794 [2024-11-17T15:09:58.338Z] =================================================================================================================== 00:33:12.794 [2024-11-17T15:09:58.338Z] Total : 12273.12 47.94 281.23 0.00 10173.10 589.82 1026765.62 00:33:12.794 Received shutdown signal, test time was about 15.000000 seconds 00:33:12.794 00:33:12.794 Latency(us) 00:33:12.794 [2024-11-17T15:09:58.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.794 [2024-11-17T15:09:58.338Z] =================================================================================================================== 00:33:12.794 [2024-11-17T15:09:58.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:12.794 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:33:12.794 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:33:12.794 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:33:12.794 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3505871 00:33:12.794 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:33:12.794 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3505871 /var/tmp/bdevperf.sock 00:33:12.794 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3505871 ']' 00:33:12.794 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:12.794 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.794 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:12.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
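The grep -c 'Resetting controller successful' check above (count=3) requires exactly three successful controller resets in the preceding 15-second run, presumably counted from the captured bdevperf output. The new bdevperf instance is then started with -z (wait for RPC) and -f against /var/tmp/bdevperf.sock, and everything that follows is driven over that socket. A condensed sketch of the pattern traced below, with paths shortened to the SPDK tree root (the real script also registers the 4421/4422 target listeners first):

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
        -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests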
00:33:12.794 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.794 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:13.736 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:13.736 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:33:13.736 16:09:58 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:33:13.736 [2024-11-17 16:09:59.139905] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:33:13.736 16:09:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:33:13.995 [2024-11-17 16:09:59.336645] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:33:13.995 16:09:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:14.253 NVMe0n1 00:33:14.253 16:09:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:14.511 00:33:14.511 16:09:59 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:33:14.769 00:33:14.769 16:10:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:14.769 16:10:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:33:15.027 16:10:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:15.027 16:10:00 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:18.343 16:10:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:18.343 16:10:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:18.343 16:10:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:18.343 16:10:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3506755 00:33:18.343 16:10:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3506755 00:33:19.718 { 00:33:19.718 "results": [ 00:33:19.718 { 00:33:19.718 "job": "NVMe0n1", 
00:33:19.718 "core_mask": "0x1", 00:33:19.718 "workload": "verify", 00:33:19.718 "status": "finished", 00:33:19.718 "verify_range": { 00:33:19.718 "start": 0, 00:33:19.718 "length": 16384 00:33:19.718 }, 00:33:19.718 "queue_depth": 128, 00:33:19.718 "io_size": 4096, 00:33:19.718 "runtime": 1.007463, 00:33:19.718 "iops": 15500.321103603805, 00:33:19.718 "mibps": 60.54812931095236, 00:33:19.718 "io_failed": 0, 00:33:19.718 "io_timeout": 0, 00:33:19.718 "avg_latency_us": 8213.272340983607, 00:33:19.718 "min_latency_us": 2804.9408, 00:33:19.718 "max_latency_us": 15099.4944 00:33:19.718 } 00:33:19.718 ], 00:33:19.718 "core_count": 1 00:33:19.718 } 00:33:19.718 16:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:19.718 [2024-11-17 16:09:58.167531] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:33:19.718 [2024-11-17 16:09:58.167641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3505871 ] 00:33:19.718 [2024-11-17 16:09:58.301469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.718 [2024-11-17 16:09:58.403885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.718 [2024-11-17 16:10:00.532780] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:33:19.718 [2024-11-17 16:10:00.533436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:33:19.718 [2024-11-17 16:10:00.533496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:19.718 [2024-11-17 16:10:00.571177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] CQ transport error -6 (No such device or address) on qpair id 0 00:33:19.718 [2024-11-17 16:10:00.595768] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:19.718 Running I/O for 1 seconds... 
00:33:19.718 15488.00 IOPS, 60.50 MiB/s 00:33:19.718 Latency(us) 00:33:19.718 [2024-11-17T15:10:05.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.718 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:19.718 Verification LBA range: start 0x0 length 0x4000 00:33:19.718 NVMe0n1 : 1.01 15500.32 60.55 0.00 0.00 8213.27 2804.94 15099.49 00:33:19.718 [2024-11-17T15:10:05.262Z] =================================================================================================================== 00:33:19.718 [2024-11-17T15:10:05.262Z] Total : 15500.32 60.55 0.00 0.00 8213.27 2804.94 15099.49 00:33:19.718 16:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:19.718 16:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:19.718 16:10:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:19.977 16:10:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:19.977 16:10:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:19.977 16:10:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:20.236 16:10:05 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3505871 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3505871 ']' 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3505871 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3505871 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3505871' 00:33:23.521 killing process with pid 3505871 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3505871 00:33:23.521 16:10:08 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3505871 00:33:24.456 16:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:33:24.456 16:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:24.456 16:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:24.456 16:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:24.456 16:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:24.456 16:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:24.456 16:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:24.715 16:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:33:24.715 rmmod nvme_rdma 00:33:24.715 rmmod nvme_fabrics 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3502453 ']' 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3502453 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3502453 ']' 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3502453 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3502453 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3502453' 00:33:24.715 killing process with pid 3502453 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3502453 00:33:24.715 16:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3502453 00:33:26.618 16:10:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:26.618 16:10:11 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:33:26.618 00:33:26.618 real 0m40.735s 00:33:26.618 user 2m15.025s 00:33:26.618 sys 0m7.887s 00:33:26.618 16:10:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:26.618 16:10:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
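Teardown for the failover test follows the usual nvmftestfini sequence traced above: delete the subsystem over RPC, stop the nvmf target application started earlier in this job (pid 3502453 here), and unload the host-side fabrics modules. In shell terms (a sketch of the steps just traced, not a verbatim copy of nvmf/common.sh):

    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # killprocess 3502453: terminate the nvmf target process
    modprobe -v -r nvme-rdma        # also pulls out nvme-fabrics as a dependency, hence both rmmod lines above
    modprobe -v -r nvme-fabrics     # effectively a no-op here, already unloaded with nvme-rdma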
00:33:26.618 ************************************ 00:33:26.618 END TEST nvmf_failover 00:33:26.618 ************************************ 00:33:26.618 16:10:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:33:26.618 16:10:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:26.618 16:10:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.618 16:10:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.618 ************************************ 00:33:26.618 START TEST nvmf_host_discovery 00:33:26.618 ************************************ 00:33:26.618 16:10:11 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:33:26.618 * Looking for test storage... 00:33:26.618 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:26.618 16:10:11 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:26.618 16:10:11 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:26.618 16:10:11 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:33:26.618 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:26.618 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:26.618 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:26.618 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:26.618 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:26.618 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:26.618 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:26.618 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:26.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.619 --rc genhtml_branch_coverage=1 00:33:26.619 --rc genhtml_function_coverage=1 00:33:26.619 --rc genhtml_legend=1 00:33:26.619 --rc geninfo_all_blocks=1 00:33:26.619 --rc geninfo_unexecuted_blocks=1 00:33:26.619 00:33:26.619 ' 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:26.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.619 --rc genhtml_branch_coverage=1 00:33:26.619 --rc genhtml_function_coverage=1 00:33:26.619 --rc genhtml_legend=1 00:33:26.619 --rc geninfo_all_blocks=1 00:33:26.619 --rc geninfo_unexecuted_blocks=1 00:33:26.619 00:33:26.619 ' 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:26.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.619 --rc genhtml_branch_coverage=1 00:33:26.619 --rc genhtml_function_coverage=1 00:33:26.619 --rc genhtml_legend=1 00:33:26.619 --rc geninfo_all_blocks=1 00:33:26.619 --rc geninfo_unexecuted_blocks=1 00:33:26.619 00:33:26.619 ' 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:26.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.619 --rc genhtml_branch_coverage=1 00:33:26.619 --rc genhtml_function_coverage=1 00:33:26.619 --rc genhtml_legend=1 00:33:26.619 --rc geninfo_all_blocks=1 00:33:26.619 --rc geninfo_unexecuted_blocks=1 00:33:26.619 00:33:26.619 ' 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:26.619 16:10:12 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:26.619 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:33:26.619 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:33:26.619 00:33:26.619 real 0m0.215s 00:33:26.619 user 0m0.135s 00:33:26.619 sys 0m0.094s 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:26.619 ************************************ 00:33:26.619 END TEST nvmf_host_discovery 00:33:26.619 ************************************ 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:26.619 16:10:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.619 ************************************ 00:33:26.619 START TEST nvmf_host_multipath_status 00:33:26.619 ************************************ 00:33:26.620 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:33:26.879 * Looking for test storage... 00:33:26.879 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:26.879 16:10:12 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:26.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.879 --rc genhtml_branch_coverage=1 00:33:26.879 --rc genhtml_function_coverage=1 00:33:26.879 --rc genhtml_legend=1 00:33:26.879 --rc geninfo_all_blocks=1 00:33:26.879 --rc geninfo_unexecuted_blocks=1 00:33:26.879 00:33:26.879 ' 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:26.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.879 --rc genhtml_branch_coverage=1 00:33:26.879 --rc genhtml_function_coverage=1 00:33:26.879 --rc genhtml_legend=1 00:33:26.879 --rc geninfo_all_blocks=1 00:33:26.879 --rc geninfo_unexecuted_blocks=1 00:33:26.879 00:33:26.879 ' 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:26.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.879 --rc genhtml_branch_coverage=1 00:33:26.879 --rc genhtml_function_coverage=1 00:33:26.879 --rc genhtml_legend=1 00:33:26.879 --rc geninfo_all_blocks=1 00:33:26.879 --rc geninfo_unexecuted_blocks=1 00:33:26.879 00:33:26.879 ' 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:26.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:26.879 --rc genhtml_branch_coverage=1 00:33:26.879 --rc genhtml_function_coverage=1 
00:33:26.879 --rc genhtml_legend=1 00:33:26.879 --rc geninfo_all_blocks=1 00:33:26.879 --rc geninfo_unexecuted_blocks=1 00:33:26.879 00:33:26.879 ' 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.879 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:33:26.880 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:26.880 16:10:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:33.443 16:10:18 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:33:33.443 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:33:33.443 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:33:33.443 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:33:33.444 Found net devices under 0000:d9:00.0: mlx_0_0 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:33:33.444 Found net devices under 0000:d9:00.1: mlx_0_1 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # rdma_device_init 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@530 -- # allocate_nic_ips 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:33.444 
16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:33:33.444 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:33.444 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:33:33.444 altname enp217s0f0np0 00:33:33.444 altname ens818f0np0 00:33:33.444 inet 192.168.100.8/24 scope global mlx_0_0 00:33:33.444 valid_lft forever preferred_lft forever 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:33:33.444 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:33:33.444 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:33:33.444 altname enp217s0f1np1 00:33:33.444 altname ens818f1np1 00:33:33.444 inet 192.168.100.9/24 scope global mlx_0_1 00:33:33.444 valid_lft forever preferred_lft forever 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:33:33.444 16:10:18 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:33:33.444 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:33:33.445 192.168.100.9' 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:33:33.445 192.168.100.9' 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # head -n 1 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # tail -n +2 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # head -n 1 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:33:33.445 192.168.100.9' 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:33.445 16:10:18 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3511415 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3511415 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3511415 ']' 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:33.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:33.445 16:10:18 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:33.703 [2024-11-17 16:10:19.017954] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:33:33.703 [2024-11-17 16:10:19.018047] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:33.703 [2024-11-17 16:10:19.149919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:33.962 [2024-11-17 16:10:19.246478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:33.962 [2024-11-17 16:10:19.246523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:33.962 [2024-11-17 16:10:19.246536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:33.962 [2024-11-17 16:10:19.246550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:33.962 [2024-11-17 16:10:19.246560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
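
The addresses 192.168.100.8 and 192.168.100.9 picked up as NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP come from the ip/awk/cut pipeline traced above. A self-contained sketch of that derivation, assuming the mlx_0_* interfaces discovered earlier in this log are present and configured:

  #!/usr/bin/env bash
  # Sketch of the get_ip_address pipeline seen in the nvmf/common.sh trace above.
  get_ip_address() {
      local interface=$1
      # Print the interface's IPv4 address without the /prefix length.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  first_ip=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
  second_ip=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run
  echo "NVMF_FIRST_TARGET_IP=$first_ip"
  echo "NVMF_SECOND_TARGET_IP=$second_ip"
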
00:33:33.962 [2024-11-17 16:10:19.248730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:33.962 [2024-11-17 16:10:19.248738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.529 16:10:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:34.529 16:10:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:34.529 16:10:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:34.529 16:10:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:34.529 16:10:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:34.529 16:10:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:34.529 16:10:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3511415 00:33:34.529 16:10:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:33:34.529 [2024-11-17 16:10:20.049051] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028b40/0x7f1bcad61940) succeed. 00:33:34.529 [2024-11-17 16:10:20.058447] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028cc0/0x7f1bcad1d940) succeed. 00:33:34.787 16:10:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:35.045 Malloc0 00:33:35.045 16:10:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:35.304 16:10:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:35.561 16:10:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:33:35.561 [2024-11-17 16:10:21.017762] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:33:35.561 16:10:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:33:35.819 [2024-11-17 16:10:21.202082] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:33:35.819 16:10:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:35.819 16:10:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3511862 00:33:35.819 16:10:21 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:35.819 16:10:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3511862 /var/tmp/bdevperf.sock 00:33:35.819 16:10:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3511862 ']' 00:33:35.819 16:10:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:35.819 16:10:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:35.819 16:10:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:35.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:35.819 16:10:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:35.819 16:10:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:36.754 16:10:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:36.754 16:10:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:36.754 16:10:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:37.012 16:10:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:37.270 Nvme0n1 00:33:37.270 16:10:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:37.527 Nvme0n1 00:33:37.527 16:10:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:37.527 16:10:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:39.427 16:10:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:39.427 16:10:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:33:39.685 16:10:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:39.685 16:10:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:41.060 16:10:26 
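
The RPC sequence traced above builds the target side (RDMA transport, a Malloc0 namespace, listeners on ports 4420 and 4421) and then attaches the same subsystem twice through bdevperf so the host sees two I/O paths. A condensed replay of those calls, copied from the trace; it assumes nvmf_tgt and bdevperf are already running on the sockets shown earlier in this log:

  #!/usr/bin/env bash
  # Condensed replay of the RPC calls traced above (values taken from this log).
  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # Target side: RDMA transport, a 64 MiB malloc bdev, one subsystem, two listeners.
  $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -r -m 2
  $rpc_py nvmf_subsystem_add_ns $NQN Malloc0
  $rpc_py nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4420
  $rpc_py nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4421

  # Host side (bdevperf): two controllers for the same NQN, one per port, in multipath mode.
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_set_options -r -1
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b Nvme0 -t rdma \
      -a 192.168.100.8 -s 4420 -f ipv4 -n $NQN -x multipath -l -1 -o 10
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b Nvme0 -t rdma \
      -a 192.168.100.8 -s 4421 -f ipv4 -n $NQN -x multipath -l -1 -o 10
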
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:41.061 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:41.061 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.061 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:41.061 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.061 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:41.061 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.061 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:41.061 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:41.061 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:41.061 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.061 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:41.319 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.319 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:41.319 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.319 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:41.578 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.578 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:41.578 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.578 16:10:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:41.837 16:10:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.837 16:10:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:41.837 16:10:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.837 16:10:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:41.837 16:10:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:41.837 16:10:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:41.837 16:10:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:42.095 16:10:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:42.353 16:10:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:43.288 16:10:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:43.288 16:10:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:43.288 16:10:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.288 16:10:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:43.547 16:10:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:43.547 16:10:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:43.547 16:10:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.547 16:10:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:43.814 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.814 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:43.814 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.814 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:43.814 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e 
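
Each check_status block above repeats the same RPC-plus-jq probe for the current, connected and accessible fields of the 4420 and 4421 paths. A minimal reconstruction of that probe from the trace; the helper name port_status matches the function seen in multipath_status.sh, but the body and the example assertion below are a sketch, not the script itself:

  #!/usr/bin/env bash
  # Reconstruction of the port_status probe traced above.
  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  bdevperf_rpc_sock=/var/tmp/bdevperf.sock

  port_status() {
      local port=$1 attr=$2 expected=$3
      # Pull the requested attribute for the I/O path that uses the given listener port.
      local actual
      actual=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
          | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
      [[ $actual == "$expected" ]]
  }

  # Example: assert that the 4420 path is current, connected and accessible.
  port_status 4420 current true && port_status 4420 connected true \
      && port_status 4420 accessible true && echo "4420 path OK"
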
]] 00:33:43.814 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:44.072 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.072 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:44.072 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.072 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:44.072 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.072 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:44.330 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.330 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:44.330 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:44.330 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:44.588 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:44.588 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:44.588 16:10:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:44.846 16:10:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:33:44.846 16:10:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:46.220 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:46.220 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:46.220 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.221 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:46.221 16:10:31 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.221 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:46.221 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.221 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:46.221 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:46.221 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:46.221 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.221 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:46.479 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.479 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:46.479 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.479 16:10:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:46.737 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.737 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:46.737 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.737 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:46.995 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.995 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:46.995 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.995 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:46.995 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.995 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:33:46.995 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:47.253 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:33:47.511 16:10:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:48.445 16:10:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:48.445 16:10:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:48.445 16:10:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.445 16:10:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:48.703 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.703 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:48.703 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.703 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:48.961 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:48.961 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:48.961 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.961 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:48.961 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.961 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:48.961 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.961 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:49.219 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.219 16:10:34 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:49.219 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.219 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:49.477 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.477 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:49.477 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.477 16:10:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:49.735 16:10:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:49.735 16:10:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:49.735 16:10:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:33:49.735 16:10:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:33:49.993 16:10:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:50.926 16:10:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:50.926 16:10:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:50.926 16:10:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.926 16:10:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:51.185 16:10:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:51.185 16:10:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:51.185 16:10:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.185 16:10:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:51.442 16:10:36 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:51.442 16:10:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:51.442 16:10:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.442 16:10:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:51.699 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.699 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:51.699 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:51.699 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.699 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.699 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:51.699 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.957 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:51.957 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:51.957 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:51.957 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:51.957 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.214 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:52.214 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:52.214 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:33:52.471 16:10:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:52.471 16:10:38 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:53.848 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:53.848 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:53.848 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.848 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:53.848 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:53.848 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:53.848 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.848 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:54.175 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.175 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:54.175 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.175 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:54.175 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.175 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:54.175 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.175 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:54.449 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.449 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:54.449 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.449 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:54.449 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
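
The state transitions driving these checks are all issued with nvmf_subsystem_listener_set_ana_state on the two listeners, as traced above. A small sketch of that switching pattern using the same subsystem, address and ports as this run; the helper body is reconstructed from the trace rather than copied from multipath_status.sh:

  #!/usr/bin/env bash
  # Sketch of the ANA-state switching pattern traced above.
  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  TARGET_IP=192.168.100.8

  set_ANA_state() {
      # $1 -> ANA state for the 4420 listener, $2 -> ANA state for the 4421 listener.
      $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t rdma -a $TARGET_IP -s 4420 -n "$1"
      $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t rdma -a $TARGET_IP -s 4421 -n "$2"
  }

  # Example transition from this log: make 4420 inaccessible and 4421 optimized,
  # then give the host a moment to re-evaluate its paths before re-checking status.
  set_ANA_state inaccessible optimized
  sleep 1
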
[[ false == \f\a\l\s\e ]] 00:33:54.449 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:54.449 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.449 16:10:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:54.720 16:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.720 16:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:54.978 16:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:54.978 16:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:33:55.237 16:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:55.237 16:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:56.612 16:10:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:56.612 16:10:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:56.612 16:10:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.612 16:10:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:56.612 16:10:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.612 16:10:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:56.612 16:10:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.612 16:10:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:56.871 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.871 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:56.871 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:33:56.871 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.871 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.871 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:56.871 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.871 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:57.129 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.129 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:57.129 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.130 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:57.387 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.387 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:57.387 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:57.387 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:57.646 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:57.646 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:57.646 16:10:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:33:57.646 16:10:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:33:57.903 16:10:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:58.836 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:58.836 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:58.836 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.836 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:59.094 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:59.094 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:59.094 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.094 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:59.353 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.353 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:59.353 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.353 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:59.611 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.611 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:59.611 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.611 16:10:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:59.611 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.611 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:59.611 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.611 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:59.869 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.869 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:59.869 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:59.869 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:00.127 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:00.127 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:34:00.127 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:00.385 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:34:00.385 16:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:34:01.758 16:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:34:01.758 16:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:01.758 16:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.758 16:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:01.758 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.758 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:34:01.758 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:01.758 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.758 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.758 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:01.758 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.758 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:02.016 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.016 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:02.016 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:34:02.016 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:02.274 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.274 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:02.274 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.274 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:02.532 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.532 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:34:02.532 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.532 16:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:02.532 16:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.532 16:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:02.532 16:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:34:02.789 16:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:34:03.048 16:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:03.981 16:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:03.981 16:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:03.981 16:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:03.981 16:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:04.238 16:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:04.238 16:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:04.238 16:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.238 16:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:04.496 16:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:04.496 16:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:04.496 16:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.496 16:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:04.496 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:04.496 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:04.496 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.496 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:04.754 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:04.754 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:04.754 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:04.754 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:05.012 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:05.012 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:05.012 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:05.012 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:05.269 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:05.269 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3511862 00:34:05.269 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3511862 ']' 00:34:05.269 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3511862 00:34:05.269 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@959 -- # uname 00:34:05.270 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:05.270 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3511862 00:34:05.270 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:34:05.270 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:05.270 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3511862' 00:34:05.270 killing process with pid 3511862 00:34:05.270 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3511862 00:34:05.270 16:10:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3511862 00:34:05.270 { 00:34:05.270 "results": [ 00:34:05.270 { 00:34:05.270 "job": "Nvme0n1", 00:34:05.270 "core_mask": "0x4", 00:34:05.270 "workload": "verify", 00:34:05.270 "status": "terminated", 00:34:05.270 "verify_range": { 00:34:05.270 "start": 0, 00:34:05.270 "length": 16384 00:34:05.270 }, 00:34:05.270 "queue_depth": 128, 00:34:05.270 "io_size": 4096, 00:34:05.270 "runtime": 27.692998, 00:34:05.270 "iops": 14064.999390820742, 00:34:05.270 "mibps": 54.94140387039352, 00:34:05.270 "io_failed": 0, 00:34:05.270 "io_timeout": 0, 00:34:05.270 "avg_latency_us": 9079.117258102911, 00:34:05.270 "min_latency_us": 1690.8288, 00:34:05.270 "max_latency_us": 3019898.88 00:34:05.270 } 00:34:05.270 ], 00:34:05.270 "core_count": 1 00:34:05.270 } 00:34:06.207 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3511862 00:34:06.207 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:06.207 [2024-11-17 16:10:21.287845] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:34:06.207 [2024-11-17 16:10:21.287943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3511862 ] 00:34:06.207 [2024-11-17 16:10:21.416511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:06.207 [2024-11-17 16:10:21.518109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:06.207 Running I/O for 90 seconds... 
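The JSON block above is bdevperf's final per-job report after the kill: roughly 27.69 s of runtime, ~14,065 IOPS at a 4096-byte I/O size, reported as ~54.94 MiB/s. The MiB/s figure is just iops * io_size / 2^20, which can be rechecked with a small jq one-liner; this is a hedged sketch that assumes the JSON object was captured into a file named results.json (the file name is an assumption, not something the log shows).

# Recompute throughput from the bdevperf result JSON shown above.
# 14064.999 IOPS * 4096 B / 1048576 ≈ 54.94 MiB/s, matching the "mibps" field.
jq -r '.results[0] | "\(.job): \(.iops * .io_size / 1048576) MiB/s over \(.runtime) s"' results.json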
00:34:06.207 16068.00 IOPS, 62.77 MiB/s [2024-11-17T15:10:51.751Z] 16242.00 IOPS, 63.45 MiB/s [2024-11-17T15:10:51.751Z] 16298.67 IOPS, 63.67 MiB/s [2024-11-17T15:10:51.751Z] 16320.00 IOPS, 63.75 MiB/s [2024-11-17T15:10:51.751Z] 16333.60 IOPS, 63.80 MiB/s [2024-11-17T15:10:51.751Z] 16351.33 IOPS, 63.87 MiB/s [2024-11-17T15:10:51.751Z] 16357.14 IOPS, 63.90 MiB/s [2024-11-17T15:10:51.751Z] 16328.38 IOPS, 63.78 MiB/s [2024-11-17T15:10:51.751Z] 16323.78 IOPS, 63.76 MiB/s [2024-11-17T15:10:51.751Z] 16317.40 IOPS, 63.74 MiB/s [2024-11-17T15:10:51.751Z] 16320.27 IOPS, 63.75 MiB/s [2024-11-17T15:10:51.751Z] 16312.25 IOPS, 63.72 MiB/s [2024-11-17T15:10:51.751Z] [2024-11-17 16:10:35.216690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004343000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.216755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.216811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004381000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.216829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.216847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437f000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.216862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.216878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f7000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.216897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.216912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f9000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.216931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.216948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004383000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.216963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.216979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.216995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27336 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000043ed000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.217027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.217063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.217096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:27360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.217127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.217160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:27376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.217196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.217228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:27392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.217257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x182900 00:34:06.207 [2024-11-17 16:10:35.217288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.207 [2024-11-17 
16:10:35.217319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:27824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.207 [2024-11-17 16:10:35.217349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:27832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.207 [2024-11-17 16:10:35.217379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:27840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.207 [2024-11-17 16:10:35.217409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.207 [2024-11-17 16:10:35.217441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.207 [2024-11-17 16:10:35.217471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:27864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.207 [2024-11-17 16:10:35.217501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:27872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.207 [2024-11-17 16:10:35.217530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.207 [2024-11-17 16:10:35.217559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:27888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.207 [2024-11-17 16:10:35.217597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27896 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:34:06.207 [2024-11-17 16:10:35.217628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:06.207 [2024-11-17 16:10:35.217644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:27408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434d000 len:0x1000 key:0x182900 00:34:06.208 [2024-11-17 16:10:35.217659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.217674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438d000 len:0x1000 key:0x182900 00:34:06.208 [2024-11-17 16:10:35.217692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.217707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:27904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.217732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.217747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.217760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.217779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:27920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.217793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.217807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.217823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.217839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:27936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.217853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.217868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.217881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.217896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:27952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.217910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 
16:10:35.217925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:27960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.217941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.217955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:27968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.217969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.217984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.217998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:27984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:27992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:28016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:28024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:28032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 
cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:28040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:28048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:28056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:28088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:28096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.208 [2024-11-17 16:10:35.218533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434f000 len:0x1000 key:0x182900 00:34:06.208 [2024-11-17 16:10:35.218570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:27432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004351000 len:0x1000 key:0x182900 00:34:06.208 [2024-11-17 16:10:35.218600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004353000 len:0x1000 key:0x182900 00:34:06.208 [2024-11-17 16:10:35.218629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004355000 len:0x1000 key:0x182900 00:34:06.208 [2024-11-17 16:10:35.218660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:27456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004357000 len:0x1000 key:0x182900 00:34:06.208 [2024-11-17 16:10:35.218691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004359000 len:0x1000 key:0x182900 00:34:06.208 [2024-11-17 16:10:35.218720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435b000 len:0x1000 key:0x182900 00:34:06.208 [2024-11-17 16:10:35.218749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435d000 len:0x1000 key:0x182900 00:34:06.208 [2024-11-17 16:10:35.218780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 
p:0 m:0 dnr:0 00:34:06.208 [2024-11-17 16:10:35.218799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d7000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.218813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.218828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d9000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.218842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.218857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004363000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.218887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.218904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:27512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004365000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.218922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.218937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:27520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004367000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.218952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.218967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004369000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.218981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.218998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:27536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436b000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436d000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:27552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436f000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 
16:10:35.219086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004371000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004373000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:27576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004375000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004385000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004379000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004389000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004387000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437d000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:27624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437b000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219357] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438b000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:27640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.209 [2024-11-17 16:10:35.219434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:28136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.209 [2024-11-17 16:10:35.219463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.209 [2024-11-17 16:10:35.219492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.209 [2024-11-17 16:10:35.219522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.209 [2024-11-17 16:10:35.219555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:28168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.209 [2024-11-17 16:10:35.219589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:28176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:06.209 [2024-11-17 16:10:35.219620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:06.209 [2024-11-17 16:10:35.219635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:27648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x182900 00:34:06.209 [2024-11-17 16:10:35.219652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:34:06.209 [2024-11-17 16:10:35.219667 - 16:10:35.221156] nvme_qpair.c: *NOTICE*: READ (lba 27656-27808, SGL KEYED DATA BLOCK, key:0x182900) and WRITE (lba 28184-28296, SGL DATA BLOCK) commands on sqid:1, each followed by a completion of ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 [dozens of per-command NOTICE/completion pairs, identical apart from cid/lba/buffer address]
00:34:06.210 15386.00 IOPS, 60.10 MiB/s
[2024-11-17T15:10:51.754Z] 14287.00 IOPS, 55.81 MiB/s
[2024-11-17T15:10:51.754Z] 13334.53 IOPS, 52.09 MiB/s
[2024-11-17T15:10:51.754Z] 13255.25 IOPS, 51.78 MiB/s
[2024-11-17T15:10:51.754Z] 13440.88 IOPS, 52.50 MiB/s
[2024-11-17T15:10:51.754Z] 13537.61 IOPS, 52.88 MiB/s
[2024-11-17T15:10:51.754Z] 13558.05 IOPS, 52.96 MiB/s
[2024-11-17T15:10:51.754Z] 13562.15 IOPS, 52.98 MiB/s
[2024-11-17T15:10:51.754Z] 13676.67 IOPS, 53.42 MiB/s
[2024-11-17T15:10:51.754Z] 13805.32 IOPS, 53.93 MiB/s
[2024-11-17T15:10:51.754Z] 13907.83 IOPS, 54.33 MiB/s
[2024-11-17T15:10:51.754Z] 13904.12 IOPS, 54.31 MiB/s
[2024-11-17T15:10:51.754Z] 13889.28 IOPS, 54.26 MiB/s
00:34:06.210 [2024-11-17 16:10:48.404096 - 16:10:48.406635] nvme_qpair.c: *NOTICE*: READ (lba 69816-70288, SGL KEYED DATA BLOCK, key:0x182900) and WRITE (lba 70296-70824, SGL DATA BLOCK) commands on sqid:1, each followed by a completion of ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 [dozens of per-command NOTICE/completion pairs, identical apart from cid/lba/buffer address]
00:34:06.212 13935.77 IOPS, 54.44 MiB/s
[2024-11-17T15:10:51.756Z] 14023.52 IOPS, 54.78 MiB/s
[2024-11-17T15:10:51.756Z] Received shutdown signal, test time was about 27.693657 seconds
00:34:06.212
00:34:06.212                                                                                            Latency(us)
00:34:06.212 [2024-11-17T15:10:51.756Z] Device Information                                             : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:06.212 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:06.212 Verification LBA range: start 0x0 length 0x4000
00:34:06.212 Nvme0n1                                                        :      27.69   14065.00      54.94       0.00     0.00    9079.12    1690.83 3019898.88
00:34:06.212 [2024-11-17T15:10:51.756Z] ===================================================================================================================
00:34:06.212 [2024-11-17T15:10:51.756Z] Total                                                          :              14065.00      54.94       0.00     0.00    9079.12    1690.83 3019898.88
00:34:06.212 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- #
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:06.471 rmmod nvme_rdma 00:34:06.471 rmmod nvme_fabrics 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3511415 ']' 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3511415 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3511415 ']' 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3511415 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3511415 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3511415' 00:34:06.471 killing process with pid 3511415 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3511415 00:34:06.471 16:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3511415 00:34:07.846 16:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:07.846 16:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:07.846 00:34:07.846 real 0m41.234s 
00:34:07.846 user 1m55.482s 00:34:07.846 sys 0m9.436s 00:34:07.846 16:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:08.105 ************************************ 00:34:08.105 END TEST nvmf_host_multipath_status 00:34:08.105 ************************************ 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.105 ************************************ 00:34:08.105 START TEST nvmf_discovery_remove_ifc 00:34:08.105 ************************************ 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:34:08.105 * Looking for test storage... 00:34:08.105 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:08.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.105 --rc genhtml_branch_coverage=1 00:34:08.105 --rc genhtml_function_coverage=1 00:34:08.105 --rc genhtml_legend=1 00:34:08.105 --rc geninfo_all_blocks=1 00:34:08.105 --rc geninfo_unexecuted_blocks=1 00:34:08.105 00:34:08.105 ' 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:08.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.105 --rc genhtml_branch_coverage=1 00:34:08.105 --rc genhtml_function_coverage=1 00:34:08.105 --rc genhtml_legend=1 00:34:08.105 --rc geninfo_all_blocks=1 00:34:08.105 --rc geninfo_unexecuted_blocks=1 00:34:08.105 00:34:08.105 ' 00:34:08.105 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:08.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.106 --rc genhtml_branch_coverage=1 00:34:08.106 --rc genhtml_function_coverage=1 00:34:08.106 --rc genhtml_legend=1 00:34:08.106 --rc geninfo_all_blocks=1 00:34:08.106 --rc geninfo_unexecuted_blocks=1 00:34:08.106 00:34:08.106 ' 00:34:08.106 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:08.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.106 --rc genhtml_branch_coverage=1 00:34:08.106 --rc genhtml_function_coverage=1 00:34:08.106 --rc genhtml_legend=1 00:34:08.106 --rc geninfo_all_blocks=1 00:34:08.106 --rc geninfo_unexecuted_blocks=1 00:34:08.106 00:34:08.106 ' 00:34:08.106 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.364 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:08.364 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.364 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.364 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.364 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.365 16:10:53 
nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:08.365 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 
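For reference, the scripts/common.sh trace a little further up (@333-@368, repeated again below for the next test) is the helper chain behind the `lcov --version` check: lt splits the two version strings on '.', '-' and ':' and compares the numeric components pairwise. The following is only a sketch reconstructed from the traced statements; the here-string plumbing, the fallback to 0 in decimal, and the extra operators in the case are assumptions, and the real SPDK scripts/common.sh may differ in detail.

    # Sketch of the version-comparison helpers exercised by "lt 1.15 2" in this log.
    decimal() {
        local d=$1
        # trace shows "decimal 1" -> "echo 1"; the non-numeric fallback to 0 is assumed
        if [[ $d =~ ^[0-9]+$ ]]; then echo "$d"; else echo 0; fi
    }

    cmp_versions() {
        local ver1 ver2 op=$2
        IFS=.-: read -ra ver1 <<< "$1"     # "1.15" -> (1 15); here-string input is assumed
        IFS=.-: read -ra ver2 <<< "$3"     # "2"    -> (2)
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        local lt=0 gt=0 v
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            local a b
            a=$(decimal "${ver1[v]:-0}") b=$(decimal "${ver2[v]:-0}")
            ((a > b)) && { gt=1; break; }   # first differing component decides the result
            ((a < b)) && { lt=1; break; }
        done
        case "$op" in
            '<')  ((lt == 1)) ;;
            '>')  ((gt == 1)) ;;
            '<=') ((gt == 0)) ;;
            '>=') ((lt == 0)) ;;
            '=' | '==') ((lt == 0 && gt == 0)) ;;
        esac
    }

    lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 -> exit status 0 (true), as traced

With lcov 1.15, the first components compare as 1 < 2, lt succeeds, and the script then exports the lcov_branch_coverage/lcov_function_coverage options seen in LCOV_OPTS above.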
00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:34:08.365 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:34:08.365 00:34:08.365 real 0m0.213s 00:34:08.365 user 0m0.125s 00:34:08.365 sys 0m0.106s 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.365 ************************************ 00:34:08.365 END TEST nvmf_discovery_remove_ifc 00:34:08.365 ************************************ 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.365 ************************************ 00:34:08.365 START TEST nvmf_identify_kernel_target 00:34:08.365 ************************************ 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:34:08.365 * Looking for test storage... 
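Each START TEST / END TEST banner and the real/user/sys block above is emitted by the run_test wrapper in autotest_common.sh; the trace shows its argument-count check at @1105 and the xtrace_disable calls around the banners at @1111/@1130. Below is a loose sketch of that flow. The banner text, the [ $# -le 1 ] check, and timing the supplied command are taken from the log; the error message, the return-code handling, and the omission of the xtrace bookkeeping are assumptions rather than the actual SPDK implementation.

    # Sketch of the run_test flow visible in this log (not the verbatim SPDK helper).
    run_test() {
        if [ "$#" -le 1 ]; then            # traced as "'[' 3 -le 1 ']'" for name + script + flag
            echo "run_test: expected a test name followed by a command" >&2   # assumed error path
            return 1
        fi
        local test_name=$1
        shift

        printf '%s\n' '************************************' \
                      "START TEST $test_name" \
                      '************************************'

        time "$@"                          # the real/user/sys lines in the log come from this timing
        local rc=$?

        printf '%s\n' '************************************' \
                      "END TEST $test_name" \
                      '************************************'
        return $rc
    }

Called here as run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma, which is why the banner name and the timed script match up in the trace that follows.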
00:34:08.365 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:08.365 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:08.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.624 --rc genhtml_branch_coverage=1 00:34:08.624 --rc genhtml_function_coverage=1 00:34:08.624 --rc genhtml_legend=1 00:34:08.624 --rc geninfo_all_blocks=1 00:34:08.624 --rc geninfo_unexecuted_blocks=1 00:34:08.624 00:34:08.624 ' 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:08.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.624 --rc genhtml_branch_coverage=1 00:34:08.624 --rc genhtml_function_coverage=1 00:34:08.624 --rc genhtml_legend=1 00:34:08.624 --rc geninfo_all_blocks=1 00:34:08.624 --rc geninfo_unexecuted_blocks=1 00:34:08.624 00:34:08.624 ' 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:08.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.624 --rc genhtml_branch_coverage=1 00:34:08.624 --rc genhtml_function_coverage=1 00:34:08.624 --rc genhtml_legend=1 00:34:08.624 --rc geninfo_all_blocks=1 00:34:08.624 --rc geninfo_unexecuted_blocks=1 00:34:08.624 00:34:08.624 ' 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:08.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:08.624 --rc genhtml_branch_coverage=1 00:34:08.624 --rc genhtml_function_coverage=1 00:34:08.624 --rc genhtml_legend=1 00:34:08.624 --rc geninfo_all_blocks=1 00:34:08.624 --rc geninfo_unexecuted_blocks=1 00:34:08.624 00:34:08.624 ' 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:08.624 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:08.625 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:08.625 16:10:53 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.181 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:15.182 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:15.182 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:15.182 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:15.182 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.182 16:11:00 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # rdma_device_init 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@530 -- # allocate_nic_ips 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:15.182 
16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:34:15.182 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:15.182 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:15.182 altname enp217s0f0np0 00:34:15.182 altname ens818f0np0 00:34:15.182 inet 192.168.100.8/24 scope global mlx_0_0 00:34:15.182 valid_lft forever preferred_lft forever 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:15.182 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:34:15.182 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:15.182 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:34:15.182 altname enp217s0f1np1 00:34:15.183 altname ens818f1np1 00:34:15.183 inet 192.168.100.9/24 scope global mlx_0_1 00:34:15.183 valid_lft forever preferred_lft forever 00:34:15.183 16:11:00 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:15.183 
16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:34:15.183 192.168.100.9' 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:34:15.183 192.168.100.9' 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # head -n 1 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:34:15.183 192.168.100.9' 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # head -n 1 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # tail -n +2 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:15.183 16:11:00 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:15.183 16:11:00 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:34:17.716 Waiting for block devices as requested 00:34:17.974 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:17.974 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:17.974 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:17.974 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:18.233 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:18.233 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:18.233 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:18.492 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:18.492 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:18.492 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:18.752 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:18.752 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:18.752 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:19.011 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:19.011 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:19.011 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:19.270 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:34:19.270 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:19.270 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:19.270 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:19.270 16:11:04 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:19.270 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:19.270 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:19.270 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:19.270 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:19.270 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:19.270 No valid GPT data, bailing 00:34:19.270 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo rdma 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:19.529 16:11:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:34:19.529 00:34:19.529 Discovery Log Number of Records 2, Generation counter 2 00:34:19.529 =====Discovery Log Entry 0====== 00:34:19.529 trtype: rdma 00:34:19.529 adrfam: ipv4 00:34:19.529 subtype: current discovery subsystem 00:34:19.529 treq: not specified, sq 
flow control disable supported 00:34:19.529 portid: 1 00:34:19.529 trsvcid: 4420 00:34:19.529 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:19.529 traddr: 192.168.100.8 00:34:19.529 eflags: none 00:34:19.529 rdma_prtype: not specified 00:34:19.529 rdma_qptype: connected 00:34:19.529 rdma_cms: rdma-cm 00:34:19.529 rdma_pkey: 0x0000 00:34:19.529 =====Discovery Log Entry 1====== 00:34:19.529 trtype: rdma 00:34:19.529 adrfam: ipv4 00:34:19.529 subtype: nvme subsystem 00:34:19.529 treq: not specified, sq flow control disable supported 00:34:19.529 portid: 1 00:34:19.529 trsvcid: 4420 00:34:19.529 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:19.529 traddr: 192.168.100.8 00:34:19.529 eflags: none 00:34:19.529 rdma_prtype: not specified 00:34:19.529 rdma_qptype: connected 00:34:19.529 rdma_cms: rdma-cm 00:34:19.529 rdma_pkey: 0x0000 00:34:19.529 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:34:19.529 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:19.789 ===================================================== 00:34:19.789 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:19.789 ===================================================== 00:34:19.789 Controller Capabilities/Features 00:34:19.789 ================================ 00:34:19.789 Vendor ID: 0000 00:34:19.789 Subsystem Vendor ID: 0000 00:34:19.789 Serial Number: 45d9fd228e0ee751f0be 00:34:19.789 Model Number: Linux 00:34:19.789 Firmware Version: 6.8.9-20 00:34:19.789 Recommended Arb Burst: 0 00:34:19.789 IEEE OUI Identifier: 00 00 00 00:34:19.789 Multi-path I/O 00:34:19.789 May have multiple subsystem ports: No 00:34:19.789 May have multiple controllers: No 00:34:19.789 Associated with SR-IOV VF: No 00:34:19.789 Max Data Transfer Size: Unlimited 00:34:19.789 Max Number of Namespaces: 0 00:34:19.789 Max Number of I/O Queues: 1024 00:34:19.789 NVMe Specification Version (VS): 1.3 00:34:19.789 NVMe Specification Version (Identify): 1.3 00:34:19.789 Maximum Queue Entries: 128 00:34:19.789 Contiguous Queues Required: No 00:34:19.789 Arbitration Mechanisms Supported 00:34:19.789 Weighted Round Robin: Not Supported 00:34:19.789 Vendor Specific: Not Supported 00:34:19.789 Reset Timeout: 7500 ms 00:34:19.789 Doorbell Stride: 4 bytes 00:34:19.789 NVM Subsystem Reset: Not Supported 00:34:19.789 Command Sets Supported 00:34:19.789 NVM Command Set: Supported 00:34:19.789 Boot Partition: Not Supported 00:34:19.789 Memory Page Size Minimum: 4096 bytes 00:34:19.789 Memory Page Size Maximum: 4096 bytes 00:34:19.789 Persistent Memory Region: Not Supported 00:34:19.789 Optional Asynchronous Events Supported 00:34:19.789 Namespace Attribute Notices: Not Supported 00:34:19.789 Firmware Activation Notices: Not Supported 00:34:19.789 ANA Change Notices: Not Supported 00:34:19.789 PLE Aggregate Log Change Notices: Not Supported 00:34:19.789 LBA Status Info Alert Notices: Not Supported 00:34:19.789 EGE Aggregate Log Change Notices: Not Supported 00:34:19.789 Normal NVM Subsystem Shutdown event: Not Supported 00:34:19.789 Zone Descriptor Change Notices: Not Supported 00:34:19.789 Discovery Log Change Notices: Supported 00:34:19.789 Controller Attributes 00:34:19.789 128-bit Host Identifier: Not Supported 00:34:19.789 Non-Operational Permissive Mode: Not Supported 00:34:19.789 NVM Sets: Not Supported 00:34:19.789 Read Recovery Levels: 
Not Supported 00:34:19.789 Endurance Groups: Not Supported 00:34:19.789 Predictable Latency Mode: Not Supported 00:34:19.789 Traffic Based Keep ALive: Not Supported 00:34:19.789 Namespace Granularity: Not Supported 00:34:19.789 SQ Associations: Not Supported 00:34:19.789 UUID List: Not Supported 00:34:19.789 Multi-Domain Subsystem: Not Supported 00:34:19.789 Fixed Capacity Management: Not Supported 00:34:19.789 Variable Capacity Management: Not Supported 00:34:19.789 Delete Endurance Group: Not Supported 00:34:19.789 Delete NVM Set: Not Supported 00:34:19.789 Extended LBA Formats Supported: Not Supported 00:34:19.789 Flexible Data Placement Supported: Not Supported 00:34:19.789 00:34:19.789 Controller Memory Buffer Support 00:34:19.789 ================================ 00:34:19.789 Supported: No 00:34:19.789 00:34:19.789 Persistent Memory Region Support 00:34:19.789 ================================ 00:34:19.789 Supported: No 00:34:19.789 00:34:19.789 Admin Command Set Attributes 00:34:19.789 ============================ 00:34:19.789 Security Send/Receive: Not Supported 00:34:19.789 Format NVM: Not Supported 00:34:19.789 Firmware Activate/Download: Not Supported 00:34:19.789 Namespace Management: Not Supported 00:34:19.789 Device Self-Test: Not Supported 00:34:19.789 Directives: Not Supported 00:34:19.789 NVMe-MI: Not Supported 00:34:19.789 Virtualization Management: Not Supported 00:34:19.789 Doorbell Buffer Config: Not Supported 00:34:19.789 Get LBA Status Capability: Not Supported 00:34:19.789 Command & Feature Lockdown Capability: Not Supported 00:34:19.789 Abort Command Limit: 1 00:34:19.789 Async Event Request Limit: 1 00:34:19.789 Number of Firmware Slots: N/A 00:34:19.789 Firmware Slot 1 Read-Only: N/A 00:34:19.789 Firmware Activation Without Reset: N/A 00:34:19.789 Multiple Update Detection Support: N/A 00:34:19.789 Firmware Update Granularity: No Information Provided 00:34:19.789 Per-Namespace SMART Log: No 00:34:19.789 Asymmetric Namespace Access Log Page: Not Supported 00:34:19.789 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:19.789 Command Effects Log Page: Not Supported 00:34:19.789 Get Log Page Extended Data: Supported 00:34:19.789 Telemetry Log Pages: Not Supported 00:34:19.789 Persistent Event Log Pages: Not Supported 00:34:19.789 Supported Log Pages Log Page: May Support 00:34:19.789 Commands Supported & Effects Log Page: Not Supported 00:34:19.789 Feature Identifiers & Effects Log Page:May Support 00:34:19.789 NVMe-MI Commands & Effects Log Page: May Support 00:34:19.789 Data Area 4 for Telemetry Log: Not Supported 00:34:19.789 Error Log Page Entries Supported: 1 00:34:19.789 Keep Alive: Not Supported 00:34:19.789 00:34:19.789 NVM Command Set Attributes 00:34:19.789 ========================== 00:34:19.789 Submission Queue Entry Size 00:34:19.789 Max: 1 00:34:19.789 Min: 1 00:34:19.789 Completion Queue Entry Size 00:34:19.789 Max: 1 00:34:19.789 Min: 1 00:34:19.789 Number of Namespaces: 0 00:34:19.789 Compare Command: Not Supported 00:34:19.789 Write Uncorrectable Command: Not Supported 00:34:19.789 Dataset Management Command: Not Supported 00:34:19.789 Write Zeroes Command: Not Supported 00:34:19.789 Set Features Save Field: Not Supported 00:34:19.789 Reservations: Not Supported 00:34:19.789 Timestamp: Not Supported 00:34:19.789 Copy: Not Supported 00:34:19.789 Volatile Write Cache: Not Present 00:34:19.789 Atomic Write Unit (Normal): 1 00:34:19.789 Atomic Write Unit (PFail): 1 00:34:19.789 Atomic Compare & Write Unit: 1 00:34:19.789 Fused Compare & Write: Not 
Supported 00:34:19.789 Scatter-Gather List 00:34:19.789 SGL Command Set: Supported 00:34:19.789 SGL Keyed: Supported 00:34:19.789 SGL Bit Bucket Descriptor: Not Supported 00:34:19.789 SGL Metadata Pointer: Not Supported 00:34:19.789 Oversized SGL: Not Supported 00:34:19.789 SGL Metadata Address: Not Supported 00:34:19.789 SGL Offset: Supported 00:34:19.789 Transport SGL Data Block: Not Supported 00:34:19.789 Replay Protected Memory Block: Not Supported 00:34:19.789 00:34:19.789 Firmware Slot Information 00:34:19.789 ========================= 00:34:19.789 Active slot: 0 00:34:19.789 00:34:19.789 00:34:19.789 Error Log 00:34:19.789 ========= 00:34:19.789 00:34:19.789 Active Namespaces 00:34:19.789 ================= 00:34:19.789 Discovery Log Page 00:34:19.789 ================== 00:34:19.789 Generation Counter: 2 00:34:19.789 Number of Records: 2 00:34:19.789 Record Format: 0 00:34:19.789 00:34:19.789 Discovery Log Entry 0 00:34:19.789 ---------------------- 00:34:19.789 Transport Type: 1 (RDMA) 00:34:19.789 Address Family: 1 (IPv4) 00:34:19.789 Subsystem Type: 3 (Current Discovery Subsystem) 00:34:19.789 Entry Flags: 00:34:19.789 Duplicate Returned Information: 0 00:34:19.789 Explicit Persistent Connection Support for Discovery: 0 00:34:19.789 Transport Requirements: 00:34:19.789 Secure Channel: Not Specified 00:34:19.789 Port ID: 1 (0x0001) 00:34:19.789 Controller ID: 65535 (0xffff) 00:34:19.789 Admin Max SQ Size: 32 00:34:19.789 Transport Service Identifier: 4420 00:34:19.789 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:19.789 Transport Address: 192.168.100.8 00:34:19.789 Transport Specific Address Subtype - RDMA 00:34:19.789 RDMA QP Service Type: 1 (Reliable Connected) 00:34:19.789 RDMA Provider Type: 1 (No provider specified) 00:34:19.789 RDMA CM Service: 1 (RDMA_CM) 00:34:19.789 Discovery Log Entry 1 00:34:19.789 ---------------------- 00:34:19.789 Transport Type: 1 (RDMA) 00:34:19.789 Address Family: 1 (IPv4) 00:34:19.789 Subsystem Type: 2 (NVM Subsystem) 00:34:19.789 Entry Flags: 00:34:19.789 Duplicate Returned Information: 0 00:34:19.789 Explicit Persistent Connection Support for Discovery: 0 00:34:19.789 Transport Requirements: 00:34:19.790 Secure Channel: Not Specified 00:34:19.790 Port ID: 1 (0x0001) 00:34:19.790 Controller ID: 65535 (0xffff) 00:34:19.790 Admin Max SQ Size: 32 00:34:19.790 Transport Service Identifier: 4420 00:34:19.790 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:19.790 Transport Address: 192.168.100.8 00:34:19.790 Transport Specific Address Subtype - RDMA 00:34:19.790 RDMA QP Service Type: 1 (Reliable Connected) 00:34:19.790 RDMA Provider Type: 1 (No provider specified) 00:34:19.790 RDMA CM Service: 1 (RDMA_CM) 00:34:19.790 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:20.049 get_feature(0x01) failed 00:34:20.049 get_feature(0x02) failed 00:34:20.049 get_feature(0x04) failed 00:34:20.049 ===================================================== 00:34:20.049 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:34:20.049 ===================================================== 00:34:20.049 Controller Capabilities/Features 00:34:20.049 ================================ 00:34:20.049 Vendor ID: 0000 00:34:20.049 Subsystem Vendor ID: 0000 00:34:20.049 Serial Number: 
1c5e7736e2182754cd88 00:34:20.049 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:20.049 Firmware Version: 6.8.9-20 00:34:20.049 Recommended Arb Burst: 6 00:34:20.049 IEEE OUI Identifier: 00 00 00 00:34:20.049 Multi-path I/O 00:34:20.049 May have multiple subsystem ports: Yes 00:34:20.049 May have multiple controllers: Yes 00:34:20.049 Associated with SR-IOV VF: No 00:34:20.049 Max Data Transfer Size: 1048576 00:34:20.049 Max Number of Namespaces: 1024 00:34:20.049 Max Number of I/O Queues: 128 00:34:20.049 NVMe Specification Version (VS): 1.3 00:34:20.049 NVMe Specification Version (Identify): 1.3 00:34:20.049 Maximum Queue Entries: 128 00:34:20.049 Contiguous Queues Required: No 00:34:20.049 Arbitration Mechanisms Supported 00:34:20.049 Weighted Round Robin: Not Supported 00:34:20.049 Vendor Specific: Not Supported 00:34:20.049 Reset Timeout: 7500 ms 00:34:20.049 Doorbell Stride: 4 bytes 00:34:20.049 NVM Subsystem Reset: Not Supported 00:34:20.049 Command Sets Supported 00:34:20.049 NVM Command Set: Supported 00:34:20.049 Boot Partition: Not Supported 00:34:20.049 Memory Page Size Minimum: 4096 bytes 00:34:20.049 Memory Page Size Maximum: 4096 bytes 00:34:20.049 Persistent Memory Region: Not Supported 00:34:20.049 Optional Asynchronous Events Supported 00:34:20.049 Namespace Attribute Notices: Supported 00:34:20.049 Firmware Activation Notices: Not Supported 00:34:20.049 ANA Change Notices: Supported 00:34:20.049 PLE Aggregate Log Change Notices: Not Supported 00:34:20.049 LBA Status Info Alert Notices: Not Supported 00:34:20.049 EGE Aggregate Log Change Notices: Not Supported 00:34:20.049 Normal NVM Subsystem Shutdown event: Not Supported 00:34:20.049 Zone Descriptor Change Notices: Not Supported 00:34:20.049 Discovery Log Change Notices: Not Supported 00:34:20.049 Controller Attributes 00:34:20.049 128-bit Host Identifier: Supported 00:34:20.049 Non-Operational Permissive Mode: Not Supported 00:34:20.049 NVM Sets: Not Supported 00:34:20.049 Read Recovery Levels: Not Supported 00:34:20.049 Endurance Groups: Not Supported 00:34:20.049 Predictable Latency Mode: Not Supported 00:34:20.049 Traffic Based Keep ALive: Supported 00:34:20.049 Namespace Granularity: Not Supported 00:34:20.049 SQ Associations: Not Supported 00:34:20.049 UUID List: Not Supported 00:34:20.049 Multi-Domain Subsystem: Not Supported 00:34:20.049 Fixed Capacity Management: Not Supported 00:34:20.049 Variable Capacity Management: Not Supported 00:34:20.049 Delete Endurance Group: Not Supported 00:34:20.049 Delete NVM Set: Not Supported 00:34:20.049 Extended LBA Formats Supported: Not Supported 00:34:20.049 Flexible Data Placement Supported: Not Supported 00:34:20.049 00:34:20.049 Controller Memory Buffer Support 00:34:20.049 ================================ 00:34:20.049 Supported: No 00:34:20.049 00:34:20.049 Persistent Memory Region Support 00:34:20.049 ================================ 00:34:20.049 Supported: No 00:34:20.049 00:34:20.049 Admin Command Set Attributes 00:34:20.049 ============================ 00:34:20.049 Security Send/Receive: Not Supported 00:34:20.049 Format NVM: Not Supported 00:34:20.049 Firmware Activate/Download: Not Supported 00:34:20.049 Namespace Management: Not Supported 00:34:20.049 Device Self-Test: Not Supported 00:34:20.049 Directives: Not Supported 00:34:20.049 NVMe-MI: Not Supported 00:34:20.049 Virtualization Management: Not Supported 00:34:20.049 Doorbell Buffer Config: Not Supported 00:34:20.049 Get LBA Status Capability: Not Supported 00:34:20.049 Command & Feature Lockdown 
Capability: Not Supported 00:34:20.049 Abort Command Limit: 4 00:34:20.049 Async Event Request Limit: 4 00:34:20.049 Number of Firmware Slots: N/A 00:34:20.049 Firmware Slot 1 Read-Only: N/A 00:34:20.049 Firmware Activation Without Reset: N/A 00:34:20.049 Multiple Update Detection Support: N/A 00:34:20.049 Firmware Update Granularity: No Information Provided 00:34:20.049 Per-Namespace SMART Log: Yes 00:34:20.049 Asymmetric Namespace Access Log Page: Supported 00:34:20.049 ANA Transition Time : 10 sec 00:34:20.049 00:34:20.049 Asymmetric Namespace Access Capabilities 00:34:20.049 ANA Optimized State : Supported 00:34:20.049 ANA Non-Optimized State : Supported 00:34:20.049 ANA Inaccessible State : Supported 00:34:20.049 ANA Persistent Loss State : Supported 00:34:20.049 ANA Change State : Supported 00:34:20.049 ANAGRPID is not changed : No 00:34:20.049 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:20.049 00:34:20.049 ANA Group Identifier Maximum : 128 00:34:20.049 Number of ANA Group Identifiers : 128 00:34:20.049 Max Number of Allowed Namespaces : 1024 00:34:20.049 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:20.049 Command Effects Log Page: Supported 00:34:20.049 Get Log Page Extended Data: Supported 00:34:20.049 Telemetry Log Pages: Not Supported 00:34:20.049 Persistent Event Log Pages: Not Supported 00:34:20.049 Supported Log Pages Log Page: May Support 00:34:20.049 Commands Supported & Effects Log Page: Not Supported 00:34:20.049 Feature Identifiers & Effects Log Page:May Support 00:34:20.049 NVMe-MI Commands & Effects Log Page: May Support 00:34:20.049 Data Area 4 for Telemetry Log: Not Supported 00:34:20.049 Error Log Page Entries Supported: 128 00:34:20.049 Keep Alive: Supported 00:34:20.049 Keep Alive Granularity: 1000 ms 00:34:20.049 00:34:20.049 NVM Command Set Attributes 00:34:20.049 ========================== 00:34:20.049 Submission Queue Entry Size 00:34:20.050 Max: 64 00:34:20.050 Min: 64 00:34:20.050 Completion Queue Entry Size 00:34:20.050 Max: 16 00:34:20.050 Min: 16 00:34:20.050 Number of Namespaces: 1024 00:34:20.050 Compare Command: Not Supported 00:34:20.050 Write Uncorrectable Command: Not Supported 00:34:20.050 Dataset Management Command: Supported 00:34:20.050 Write Zeroes Command: Supported 00:34:20.050 Set Features Save Field: Not Supported 00:34:20.050 Reservations: Not Supported 00:34:20.050 Timestamp: Not Supported 00:34:20.050 Copy: Not Supported 00:34:20.050 Volatile Write Cache: Present 00:34:20.050 Atomic Write Unit (Normal): 1 00:34:20.050 Atomic Write Unit (PFail): 1 00:34:20.050 Atomic Compare & Write Unit: 1 00:34:20.050 Fused Compare & Write: Not Supported 00:34:20.050 Scatter-Gather List 00:34:20.050 SGL Command Set: Supported 00:34:20.050 SGL Keyed: Supported 00:34:20.050 SGL Bit Bucket Descriptor: Not Supported 00:34:20.050 SGL Metadata Pointer: Not Supported 00:34:20.050 Oversized SGL: Not Supported 00:34:20.050 SGL Metadata Address: Not Supported 00:34:20.050 SGL Offset: Supported 00:34:20.050 Transport SGL Data Block: Not Supported 00:34:20.050 Replay Protected Memory Block: Not Supported 00:34:20.050 00:34:20.050 Firmware Slot Information 00:34:20.050 ========================= 00:34:20.050 Active slot: 0 00:34:20.050 00:34:20.050 Asymmetric Namespace Access 00:34:20.050 =========================== 00:34:20.050 Change Count : 0 00:34:20.050 Number of ANA Group Descriptors : 1 00:34:20.050 ANA Group Descriptor : 0 00:34:20.050 ANA Group ID : 1 00:34:20.050 Number of NSID Values : 1 00:34:20.050 Change Count : 0 00:34:20.050 ANA State 
: 1 00:34:20.050 Namespace Identifier : 1 00:34:20.050 00:34:20.050 Commands Supported and Effects 00:34:20.050 ============================== 00:34:20.050 Admin Commands 00:34:20.050 -------------- 00:34:20.050 Get Log Page (02h): Supported 00:34:20.050 Identify (06h): Supported 00:34:20.050 Abort (08h): Supported 00:34:20.050 Set Features (09h): Supported 00:34:20.050 Get Features (0Ah): Supported 00:34:20.050 Asynchronous Event Request (0Ch): Supported 00:34:20.050 Keep Alive (18h): Supported 00:34:20.050 I/O Commands 00:34:20.050 ------------ 00:34:20.050 Flush (00h): Supported 00:34:20.050 Write (01h): Supported LBA-Change 00:34:20.050 Read (02h): Supported 00:34:20.050 Write Zeroes (08h): Supported LBA-Change 00:34:20.050 Dataset Management (09h): Supported 00:34:20.050 00:34:20.050 Error Log 00:34:20.050 ========= 00:34:20.050 Entry: 0 00:34:20.050 Error Count: 0x3 00:34:20.050 Submission Queue Id: 0x0 00:34:20.050 Command Id: 0x5 00:34:20.050 Phase Bit: 0 00:34:20.050 Status Code: 0x2 00:34:20.050 Status Code Type: 0x0 00:34:20.050 Do Not Retry: 1 00:34:20.050 Error Location: 0x28 00:34:20.050 LBA: 0x0 00:34:20.050 Namespace: 0x0 00:34:20.050 Vendor Log Page: 0x0 00:34:20.050 ----------- 00:34:20.050 Entry: 1 00:34:20.050 Error Count: 0x2 00:34:20.050 Submission Queue Id: 0x0 00:34:20.050 Command Id: 0x5 00:34:20.050 Phase Bit: 0 00:34:20.050 Status Code: 0x2 00:34:20.050 Status Code Type: 0x0 00:34:20.050 Do Not Retry: 1 00:34:20.050 Error Location: 0x28 00:34:20.050 LBA: 0x0 00:34:20.050 Namespace: 0x0 00:34:20.050 Vendor Log Page: 0x0 00:34:20.050 ----------- 00:34:20.050 Entry: 2 00:34:20.050 Error Count: 0x1 00:34:20.050 Submission Queue Id: 0x0 00:34:20.050 Command Id: 0x0 00:34:20.050 Phase Bit: 0 00:34:20.050 Status Code: 0x2 00:34:20.050 Status Code Type: 0x0 00:34:20.050 Do Not Retry: 1 00:34:20.050 Error Location: 0x28 00:34:20.050 LBA: 0x0 00:34:20.050 Namespace: 0x0 00:34:20.050 Vendor Log Page: 0x0 00:34:20.050 00:34:20.050 Number of Queues 00:34:20.050 ================ 00:34:20.050 Number of I/O Submission Queues: 128 00:34:20.050 Number of I/O Completion Queues: 128 00:34:20.050 00:34:20.050 ZNS Specific Controller Data 00:34:20.050 ============================ 00:34:20.050 Zone Append Size Limit: 0 00:34:20.050 00:34:20.050 00:34:20.050 Active Namespaces 00:34:20.050 ================= 00:34:20.050 get_feature(0x05) failed 00:34:20.050 Namespace ID:1 00:34:20.050 Command Set Identifier: NVM (00h) 00:34:20.050 Deallocate: Supported 00:34:20.050 Deallocated/Unwritten Error: Not Supported 00:34:20.050 Deallocated Read Value: Unknown 00:34:20.050 Deallocate in Write Zeroes: Not Supported 00:34:20.050 Deallocated Guard Field: 0xFFFF 00:34:20.050 Flush: Supported 00:34:20.050 Reservation: Not Supported 00:34:20.050 Namespace Sharing Capabilities: Multiple Controllers 00:34:20.050 Size (in LBAs): 3907029168 (1863GiB) 00:34:20.050 Capacity (in LBAs): 3907029168 (1863GiB) 00:34:20.050 Utilization (in LBAs): 3907029168 (1863GiB) 00:34:20.050 UUID: 78c05cfa-2dfd-416c-844f-2ebbf48d7764 00:34:20.050 Thin Provisioning: Not Supported 00:34:20.050 Per-NS Atomic Units: Yes 00:34:20.050 Atomic Boundary Size (Normal): 0 00:34:20.050 Atomic Boundary Size (PFail): 0 00:34:20.050 Atomic Boundary Offset: 0 00:34:20.050 NGUID/EUI64 Never Reused: No 00:34:20.050 ANA group ID: 1 00:34:20.050 Namespace Write Protected: No 00:34:20.050 Number of LBA Formats: 1 00:34:20.050 Current LBA Format: LBA Format #00 00:34:20.050 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:20.050 00:34:20.050 
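The block above is the identify pass the test runs against the kernel NVMe-oF target it just set up: controller attributes, the ANA group descriptor, the three error-log entries, and the namespace geometry. As a hedged aside (the test drives this through the SPDK scripts, so the tool and device node below are illustrative only), a comparable dump can usually be pulled from a connected kernel target with nvme-cli:

    # Hypothetical nvme-cli equivalents; /dev/nvme0 and namespace 1 are assumptions.
    nvme id-ctrl /dev/nvme0             # controller capabilities, log-page support, NVM attributes
    nvme id-ns /dev/nvme0 -n 1          # namespace size/capacity/utilization and LBA formats
    nvme ana-log /dev/nvme0             # ANA change count, group descriptor, per-NSID state
    nvme error-log /dev/nvme0 -e 3      # the three error-log entries shown above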
16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:34:20.050 rmmod nvme_rdma 00:34:20.050 rmmod nvme_fabrics 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:20.050 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:34:20.309 16:11:05 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:34:23.590 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 
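Editorial note on the teardown traced above (the setup.sh driver rebind listing continues directly below): nvmftestfini unloads the host-side nvme-rdma/nvme-fabrics modules, then clean_kernel_target removes the kernel target through configfs in reverse order of creation. A minimal sketch of that sequence, with the one assumption that the traced 'echo 0' goes to the namespace enable attribute (the redirect target is not shown in the xtrace output):

    # configfs teardown mirrored from the trace; port 1 and testnqn match the run above.
    nqn=nqn.2016-06.io.spdk:testnqn
    echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # assumed target of the traced 'echo 0'
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn                  # detach the subsystem from the port
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn
    modprobe -r nvmet_rdma nvmet                                            # unload transport and core once configfs is empty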
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:23.590 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:25.505 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:34:25.763 00:34:25.763 real 0m17.325s 00:34:25.763 user 0m4.568s 00:34:25.763 sys 0m9.957s 00:34:25.763 16:11:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:25.763 16:11:11 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:25.763 ************************************ 00:34:25.763 END TEST nvmf_identify_kernel_target 00:34:25.763 ************************************ 00:34:25.763 16:11:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:34:25.763 16:11:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:25.763 16:11:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.763 16:11:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.763 ************************************ 00:34:25.763 START TEST nvmf_auth_host 00:34:25.763 ************************************ 00:34:25.763 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:34:25.763 * Looking for test storage... 
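The setup.sh lines above hand every ioatdma DMA engine and the local NVMe SSD (0000:d8:00.0) over to vfio-pci so they are available to SPDK's userspace drivers rather than the kernel ones. setup.sh's internals are not part of this log; a generic sysfs rebind sketch of the kind such tooling typically performs (illustrative only, not necessarily how setup.sh does it):

    # Generic rebind-to-vfio-pci pattern; the BDF is taken from the listing above.
    bdf=0000:d8:00.0
    modprobe vfio-pci
    echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override   # pin the driver choice
    echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind       # detach from nvme/ioatdma
    echo "$bdf" > /sys/bus/pci/drivers_probe                    # let the kernel bind vfio-pci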
00:34:25.763 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:34:25.763 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:25.763 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:34:25.763 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:26.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.022 --rc genhtml_branch_coverage=1 00:34:26.022 --rc genhtml_function_coverage=1 00:34:26.022 --rc genhtml_legend=1 00:34:26.022 --rc geninfo_all_blocks=1 00:34:26.022 --rc geninfo_unexecuted_blocks=1 00:34:26.022 00:34:26.022 ' 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:26.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.022 --rc genhtml_branch_coverage=1 00:34:26.022 --rc genhtml_function_coverage=1 00:34:26.022 --rc genhtml_legend=1 00:34:26.022 --rc geninfo_all_blocks=1 00:34:26.022 --rc geninfo_unexecuted_blocks=1 00:34:26.022 00:34:26.022 ' 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:26.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.022 --rc genhtml_branch_coverage=1 00:34:26.022 --rc genhtml_function_coverage=1 00:34:26.022 --rc genhtml_legend=1 00:34:26.022 --rc geninfo_all_blocks=1 00:34:26.022 --rc geninfo_unexecuted_blocks=1 00:34:26.022 00:34:26.022 ' 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:26.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.022 --rc genhtml_branch_coverage=1 00:34:26.022 --rc genhtml_function_coverage=1 00:34:26.022 --rc genhtml_legend=1 00:34:26.022 --rc geninfo_all_blocks=1 00:34:26.022 --rc geninfo_unexecuted_blocks=1 00:34:26.022 00:34:26.022 ' 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:26.022 16:11:11 
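The scripts/common.sh trace above is auth.sh probing the installed lcov: 'lt 1.15 2' splits both version strings on '.' and '-', compares them field by field, and because 1.15 sorts below 2 the run keeps the lcov_-prefixed '--rc' options seen in the LCOV_OPTS export that follows. A condensed sketch of that comparison (same idea as the traced cmp_versions, simplified to the less-than case):

    # Field-wise version compare in the spirit of the traced cmp_versions.
    version_lt() {
        local -a a b
        IFS=.- read -ra a <<< "$1"
        IFS=.- read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly greater: not less-than
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                                        # equal: not less-than
    }
    version_lt 1.15 2 && echo "lcov < 2: keep the legacy lcov_* --rc option names"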
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:26.022 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:26.023 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:26.023 16:11:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.580 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:34:32.581 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:34:32.581 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:34:32.581 16:11:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:34:32.581 Found net devices under 0000:d9:00.0: mlx_0_0 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:34:32.581 Found net devices under 0000:d9:00.1: mlx_0_1 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # rdma_device_init 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:34:32.581 16:11:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # allocate_nic_ips 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:34:32.581 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:32.581 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:34:32.581 altname enp217s0f0np0 00:34:32.581 altname ens818f0np0 00:34:32.581 inet 192.168.100.8/24 scope global mlx_0_0 00:34:32.581 valid_lft forever preferred_lft forever 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:34:32.581 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:34:32.581 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:34:32.581 altname enp217s0f1np1 00:34:32.581 altname ens818f1np1 00:34:32.581 inet 192.168.100.9/24 scope global mlx_0_1 00:34:32.581 valid_lft forever preferred_lft forever 00:34:32.581 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo 
mlx_0_0 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:34:32.582 192.168.100.9' 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:34:32.582 192.168.100.9' 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # head -n 1 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:34:32.582 192.168.100.9' 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # tail -n +2 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # head -n 1 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 
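Everything from gather_supported_nvmf_pci_devs down to here is nvmftestinit discovering the two ConnectX ports (mlx_0_0, mlx_0_1), loading the RDMA core modules, and reading their IPv4 addresses, which is where NVMF_FIRST_TARGET_IP=192.168.100.8 and NVMF_SECOND_TARGET_IP=192.168.100.9 come from. The address extraction is just the awk/cut pipeline traced above, condensed here into a helper:

    # Same pipeline as the traced get_ip_address: field 4 of 'ip -o -4 addr show'
    # is "addr/prefix", so strip the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    first=$(get_ip_address mlx_0_0)     # 192.168.100.8 on this rig
    second=$(get_ip_address mlx_0_1)    # 192.168.100.9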
00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3526978 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3526978 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3526978 ']' 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.582 16:11:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:32.840 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:32.840 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:32.840 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:32.840 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4d88bb1f764ccdb67f1eddb8ad1097c8 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t 
spdk.key-null.XXX 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FAj 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4d88bb1f764ccdb67f1eddb8ad1097c8 0 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4d88bb1f764ccdb67f1eddb8ad1097c8 0 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4d88bb1f764ccdb67f1eddb8ad1097c8 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:32.841 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:33.099 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FAj 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FAj 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.FAj 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b77e0d66ad906568fdf8fa6bbbefeb51f27bedb318dc59b2bd926e3622c5959b 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kmh 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b77e0d66ad906568fdf8fa6bbbefeb51f27bedb318dc59b2bd926e3622c5959b 3 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b77e0d66ad906568fdf8fa6bbbefeb51f27bedb318dc59b2bd926e3622c5959b 3 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b77e0d66ad906568fdf8fa6bbbefeb51f27bedb318dc59b2bd926e3622c5959b 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kmh 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kmh 00:34:33.100 16:11:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.kmh 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1e052216067ac4324e24fbdfea6aabda93fcc440fe58c24e 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.24T 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1e052216067ac4324e24fbdfea6aabda93fcc440fe58c24e 0 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1e052216067ac4324e24fbdfea6aabda93fcc440fe58c24e 0 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1e052216067ac4324e24fbdfea6aabda93fcc440fe58c24e 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.24T 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.24T 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.24T 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=49b93f1bba418804d8ec334e2d5d53a4e91569b7274756ce 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.KTj 00:34:33.100 
16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 49b93f1bba418804d8ec334e2d5d53a4e91569b7274756ce 2 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 49b93f1bba418804d8ec334e2d5d53a4e91569b7274756ce 2 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=49b93f1bba418804d8ec334e2d5d53a4e91569b7274756ce 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.KTj 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.KTj 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.KTj 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7fbbe52c7b06291565ba1bb97feb21c0 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZnV 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7fbbe52c7b06291565ba1bb97feb21c0 1 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7fbbe52c7b06291565ba1bb97feb21c0 1 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7fbbe52c7b06291565ba1bb97feb21c0 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:33.100 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZnV 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZnV 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ZnV 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:33.405 16:11:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=20181480d989d3a56d222570a99b7089 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.p6a 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 20181480d989d3a56d222570a99b7089 1 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 20181480d989d3a56d222570a99b7089 1 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=20181480d989d3a56d222570a99b7089 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.p6a 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.p6a 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.p6a 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cb9142e56438077670c5537b4e9dfcc2b348f1f75dc9b7f5 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.B1i 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cb9142e56438077670c5537b4e9dfcc2b348f1f75dc9b7f5 2 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 
cb9142e56438077670c5537b4e9dfcc2b348f1f75dc9b7f5 2 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cb9142e56438077670c5537b4e9dfcc2b348f1f75dc9b7f5 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.B1i 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.B1i 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.B1i 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:33.405 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=553d42528077254e4deb59bc9191d82d 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Kq5 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 553d42528077254e4deb59bc9191d82d 0 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 553d42528077254e4deb59bc9191d82d 0 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=553d42528077254e4deb59bc9191d82d 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Kq5 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Kq5 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Kq5 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:33.406 
16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ff1ef4456c876c8c613516cf087bfc4aeaf8f729dc341249c9d4a75453926f20 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.l2l 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ff1ef4456c876c8c613516cf087bfc4aeaf8f729dc341249c9d4a75453926f20 3 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ff1ef4456c876c8c613516cf087bfc4aeaf8f729dc341249c9d4a75453926f20 3 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ff1ef4456c876c8c613516cf087bfc4aeaf8f729dc341249c9d4a75453926f20 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.l2l 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.l2l 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.l2l 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3526978 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3526978 ']' 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
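
Note: each gen_dhchap_key call traced above pulls len/2 random bytes with xxd -p -c0 -l <len/2> /dev/urandom, formats them as a DHHC-1 secret with an inline python snippet, stores the result in a mktemp file and locks it down with chmod 0600. A minimal self-contained sketch of that flow follows; the exact DHHC-1 payload encoding (base64 of the ASCII hex secret plus a little-endian CRC32 trailer, per the NVMe DH-HMAC-CHAP secret representation) is an assumption rather than a verbatim copy of nvmf/common.sh, and gen_dhchap_key_sketch is a hypothetical helper name.

# Hedged sketch: generate a DH-HMAC-CHAP secret file the way the trace above does.
# digest: 0=null, 1=sha256, 2=sha384, 3=sha512; len is the hex-character length (32/48/64).
gen_dhchap_key_sketch() {
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of randomness
    file=$(mktemp -t spdk.key-sketch.XXX)
    # Assumed encoding: DHHC-1:<digest as two hex digits>:base64(secret || crc32(secret)):
    python3 -c 'import base64, sys, zlib
d = int(sys.argv[1]); k = sys.argv[2].encode()
crc = zlib.crc32(k).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (d, base64.b64encode(k + crc).decode()), end="")' "$digest" "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
# Example: a 32-character secret to be hashed with sha256:  gen_dhchap_key_sketch 1 32
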
00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:33.406 16:11:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.FAj 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.kmh ]] 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kmh 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.24T 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.KTj ]] 00:34:33.735 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KTj 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ZnV 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.p6a ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p6a 00:34:33.736 16:11:19 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.B1i 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Kq5 ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Kq5 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.l2l 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:34:33.736 16:11:19 
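
Note: rpc_cmd in this suite is a thin wrapper that drives scripts/rpc.py against /var/tmp/spdk.sock, so the keyring_file_add_key loop above amounts to registering each generated secret file with the running SPDK target's keyring under the names key0..key4 and ckey0..ckey3 before the kernel target gets configured. A hedged sketch (the file names are the ones from this particular run and change every time):

# Hedged sketch: register the DH-CHAP secret files with SPDK's file-based keyring.
# Assumes the target exposes its RPC socket at /var/tmp/spdk.sock.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC keyring_file_add_key key1  /tmp/spdk.key-null.24T     # host secret for keyid 1
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KTj   # matching controller secret
# ...repeated for key0/ckey0 through key4 (key4 has no controller secret in this run).
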
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:33.736 16:11:19 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:34:37.063 Waiting for block devices as requested 00:34:37.063 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:37.063 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:37.063 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:37.063 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:37.063 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:37.321 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:37.321 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:37.321 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:37.578 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:37.578 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:37.578 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:37.836 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:37.836 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:37.836 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:37.836 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:38.094 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:38.094 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:39.030 No valid GPT data, bailing 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
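
Note: the block-device loop above is picking a local NVMe namespace to back the kernel target; it skips zoned namespaces and anything that already carries a partition table (the spdk-gpt.py / blkid probe, where "No valid GPT data, bailing" is the desired outcome). A condensed sketch of that screening, with find_free_nvme_sketch as a hypothetical helper:

# Hedged sketch: pick the first non-zoned, unpartitioned NVMe namespace as backing storage.
find_free_nvme_sketch() {
    local dev
    for dev in /sys/block/nvme*; do
        [[ -e $dev ]] || continue
        # Skip zoned namespaces; they cannot be used as a plain block backstore here.
        [[ -e $dev/queue/zoned && $(cat "$dev/queue/zoned") != none ]] && continue
        # Skip devices that already carry a partition table (blkid reports a PTTYPE for those).
        blkid -s PTTYPE -o value "/dev/${dev##*/}" > /dev/null && continue
        echo "/dev/${dev##*/}"
        return 0
    done
    return 1
}
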
scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 192.168.100.8 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo rdma 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -t rdma -s 4420 00:34:39.030 00:34:39.030 Discovery Log Number of Records 2, Generation counter 2 00:34:39.030 =====Discovery Log Entry 0====== 00:34:39.030 trtype: rdma 00:34:39.030 adrfam: ipv4 00:34:39.030 subtype: current discovery subsystem 00:34:39.030 treq: not specified, sq flow control disable supported 00:34:39.030 portid: 1 00:34:39.030 trsvcid: 4420 00:34:39.030 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:39.030 traddr: 192.168.100.8 00:34:39.030 eflags: none 00:34:39.030 rdma_prtype: not specified 00:34:39.030 rdma_qptype: connected 00:34:39.030 rdma_cms: rdma-cm 00:34:39.030 rdma_pkey: 0x0000 00:34:39.030 =====Discovery Log Entry 1====== 00:34:39.030 trtype: rdma 00:34:39.030 adrfam: ipv4 00:34:39.030 subtype: nvme subsystem 00:34:39.030 treq: not specified, sq flow control disable supported 00:34:39.030 portid: 1 00:34:39.030 trsvcid: 4420 00:34:39.030 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:39.030 traddr: 192.168.100.8 00:34:39.030 eflags: none 00:34:39.030 rdma_prtype: not specified 00:34:39.030 rdma_qptype: connected 00:34:39.030 rdma_cms: rdma-cm 00:34:39.030 rdma_pkey: 0x0000 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
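
Note: the bare mkdir/echo/ln commands above are writes into the kernel nvmet configfs tree; bash xtrace prints only the echoed values, not the redirection targets. The attribute paths used in the sketch below (device_path, enable, addr_traddr, addr_trtype, addr_trsvcid, addr_adrfam, the port's subsystems/ link) are assumed from the standard Linux nvmet configfs layout rather than read out of this log.

# Hedged sketch: stand up the kernel nvmet subsystem on RDMA, matching the commands above.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo /dev/nvme0n1  > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
echo rdma          > "$nvmet/ports/1/addr_trtype"
echo 4420          > "$nvmet/ports/1/addr_trsvcid"
echo ipv4          > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
# Sanity check from the host side; both the discovery subsystem and cnode0 should show up,
# exactly as in the two discovery log entries printed above.
nvme discover -t rdma -a 192.168.100.8 -s 4420
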
host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
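
Note: nvmet_auth_set_key above pushes the negotiated hash, DH group and both secrets into the per-host nvmet configfs entry, which is what makes the kernel target demand DH-CHAP from nqn.2024-02.io.spdk:host0. The attribute names below (attr_allow_any_host, dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are assumed from the kernel's nvmet auth configfs interface; again, the trace only shows the echoed values.

# Hedged sketch: require DH-CHAP for one allow-listed host NQN on the kernel target.
nvmet=/sys/kernel/config/nvmet
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir -p "$host"
echo 0 > "$subsys/attr_allow_any_host"            # only allow-listed hosts may connect
ln -sf "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)'      > "$host/dhchap_hash"    # hash for the CHAP transform
echo ffdhe2048           > "$host/dhchap_dhgroup" # DH group for the key exchange
echo 'DHHC-1:00:MWUw...' > "$host/dhchap_key"     # host secret (truncated in this sketch)
echo 'DHHC-1:02:NDli...' > "$host/dhchap_ctrl_key" # controller secret, for bidirectional auth
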
host/auth.sh@61 -- # get_main_ns_ip 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:39.030 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:39.031 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.031 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.289 nvme0n1 00:34:39.289 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.289 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.289 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.289 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.290 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.290 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.290 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.290 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.290 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.290 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
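
Note: connect_authenticate then drives the SPDK host side over RPC: enable the digests and DH groups under test, attach a controller to the kernel target using the keyring names registered earlier, confirm a controller (and its namespace, the bare nvme0n1 lines in this log) showed up, and detach again. A hedged sketch of one such round trip, using the same RPCs and flags as the trace:

# Hedged sketch: one authenticated attach/verify/detach cycle from the SPDK host side.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$RPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# The attach only succeeds if DH-CHAP negotiation passed; verify and clean up.
[[ $($RPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
$RPC bdev_nvme_detach_controller nvme0
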
dhgroup=ffdhe2048 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.549 16:11:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.549 nvme0n1 00:34:39.549 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.549 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.549 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.549 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.549 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.549 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.808 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.067 nvme0n1 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:40.067 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.068 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.327 nvme0n1 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.327 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.586 nvme0n1 00:34:40.586 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.586 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.586 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.586 16:11:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.586 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.586 16:11:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.586 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.845 nvme0n1 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:40.845 
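
Note: from this point the suite simply sweeps the matrix: for every digest (sha256, sha384, sha512), every DH group (ffdhe2048 through ffdhe8192) and every key index 0-4 it reprograms the kernel target and repeats the attach/verify/detach cycle; the log above has just rolled over from ffdhe2048 to ffdhe3072. The driver loop is roughly the following, with nvmet_auth_set_key / connect_authenticate as traced above and keys[] being the secret files registered earlier:

# Hedged sketch of the outer sweep that produces the repetition seen in the rest of this log.
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # program the kernel target
            connect_authenticate "$digest" "$dhgroup" "$keyid" # attach, verify, detach
        done
    done
done
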
16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.845 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.103 nvme0n1 00:34:41.103 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.103 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.103 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.103 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.103 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.103 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.103 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.103 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.103 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.103 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:41.361 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=1 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.362 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.621 nvme0n1 00:34:41.621 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.621 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.621 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.621 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.621 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.621 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.621 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.621 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.621 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.621 16:11:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:41.621 16:11:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.621 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.881 nvme0n1 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.881 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.138 nvme0n1 00:34:42.138 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.138 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.138 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.138 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.138 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.138 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.138 16:11:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.138 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.138 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.138 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.139 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.396 nvme0n1 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.396 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.654 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.654 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.654 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.654 16:11:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.654 
16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.654 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.912 nvme0n1 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.912 
16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:34:42.912 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.913 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.479 nvme0n1 00:34:43.479 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.479 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.479 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.479 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.479 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.479 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.479 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:43.480 
16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.480 16:11:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.739 nvme0n1 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.739 16:11:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.739 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.305 nvme0n1 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.305 
16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.305 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.306 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.564 nvme0n1 00:34:44.564 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.564 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.564 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.564 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.564 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.564 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.564 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.564 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.564 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.564 16:11:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.564 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.129 nvme0n1 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
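(For reference, each connect_authenticate iteration recorded above follows the same host-side sequence; the lines below are a minimal bash sketch of that flow, assuming SPDK's rpc_cmd wrapper and the NQNs/address used in this run, not the literal auth.sh source:)

# restrict the initiator to one digest/dhgroup combination for this pass
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
# attach with the host DH-HMAC-CHAP key (and the controller key, when a ckey exists for this keyid)
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# verify the controller authenticated and came up, then tear it down for the next keyid
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
rpc_cmd bdev_nvme_detach_controller nvme0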
00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.129 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.130 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.696 nvme0n1 00:34:45.696 16:11:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:45.696 16:11:31 
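
The keyid 1 pass traced above reduces to two host-side RPCs: restrict the initiator to the digest/dhgroup pair under test, then attach with DH-HMAC-CHAP. A minimal sketch of that sequence, assuming ./scripts/rpc.py is the SPDK RPC client (the trace calls it through the rpc_cmd wrapper) and that the key1/ckey1 keyring entries were registered earlier in the run, which is not shown in this excerpt:

  # 1) Limit negotiation to the digest/dhgroup combination being exercised.
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

  # 2) Attach over RDMA with DH-HMAC-CHAP, offering key1 and requiring
  #    ckey1 for bidirectional (controller) authentication.
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
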
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.696 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.262 nvme0n1 00:34:46.262 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.262 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.262 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.262 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.262 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.262 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.262 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.262 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.262 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.262 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.262 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.263 16:11:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.521 nvme0n1 00:34:46.521 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.521 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.521 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.521 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.521 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.521 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.779 
16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 
-- # echo 192.168.100.8 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.779 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.037 nvme0n1 00:34:47.037 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.037 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.037 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.037 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.037 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:47.295 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:47.296 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:47.296 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.296 16:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.861 nvme0n1 00:34:47.861 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.861 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.861 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.861 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.861 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.862 16:11:33 
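
After each attach, the script verifies that exactly the expected controller came up and then tears it down before the next combination, which is the get_controllers / jq / detach pattern that repeats throughout this trace. A sketch of that check, assuming the same rpc.py client as above:

  # The controller list should contain only the controller created above;
  # a mismatch here makes the test case fail.
  name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]

  # Clean up before the next digest/dhgroup/keyid combination.
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0
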
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.862 16:11:33 
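
The get_main_ns_ip fragments that keep appearing (one is being traced just above) come from a small helper in nvmf/common.sh that maps the transport under test to the right address variable and expands it indirectly. Reconstructed from the xtrace alone, so treat it as an approximation rather than the exact source; the transport variable name is assumed:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      # Pick the variable name for the active transport (rdma in this run),
      # then expand it indirectly; here it resolves to 192.168.100.8.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }
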
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.862 16:11:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.795 nvme0n1 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- 
# echo 'hmac(sha256)' 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:48.795 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.796 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.361 nvme0n1 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.361 16:11:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.926 nvme0n1 00:34:49.926 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.926 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.926 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.926 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.926 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.926 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.184 16:11:35 
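
On the target side, each nvmet_auth_set_key <digest> <dhgroup> <keyid> call above provisions the matching secret into the kernel nvmet target; the echoes of 'hmac(sha256)', the FFDHE group, and the DHHC-1 secrets at host/auth.sh@48-51 are what get written there. A hedged sketch of that provisioning, assuming the usual nvmet configfs attributes on the host entry (the exact paths and redirections are not visible in this excerpt; the secrets are elided):

  # Assumed configfs location for the host entry used by this test.
  host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo "hmac(sha256)"  > "$host_dir/dhchap_hash"      # digest under test
  echo "ffdhe8192"     > "$host_dir/dhchap_dhgroup"   # DH group under test
  echo "DHHC-1:02:..." > "$host_dir/dhchap_key"       # host secret (keyN)
  echo "DHHC-1:00:..." > "$host_dir/dhchap_ctrl_key"  # controller secret (ckeyN), only for bidirectional cases
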
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.184 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.185 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.185 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.185 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:50.185 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:50.185 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:50.185 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:50.185 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:50.185 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:50.185 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.185 16:11:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.750 nvme0n1 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.750 16:11:36 
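
The transition just above (back to keyid 0, now with sha384 and ffdhe2048) is the outer loops advancing. The structure the trace walks, including the expansion at host/auth.sh@58 that drops --dhchap-ctrlr-key whenever a key has no controller secret (as with keyid 4 earlier), is roughly the following; a structural sketch inferred from the trace, not the verbatim script:

  for digest in "${digests[@]}"; do            # sha256, sha384, ...
      for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ... ffdhe8192
          for keyid in "${!keys[@]}"; do       # 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side provisioning
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side attach + verify + detach
          done
      done
  done

  # Inside connect_authenticate: only pass a controller key if one exists
  # for this keyid; keyid 4 has an empty ckey, so the option is omitted.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  # ...followed by: rpc_cmd bdev_nvme_attach_controller ... --dhchap-key "key${keyid}" "${ckey[@]}"
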
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.750 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.008 nvme0n1 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.008 
16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:51.008 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.009 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.009 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.266 
16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.266 nvme0n1 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.266 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.524 16:11:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.524 nvme0n1 00:34:51.524 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.524 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.524 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.524 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.524 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.782 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.782 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.782 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.782 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.782 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.782 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.782 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.782 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:51.782 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
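Each of the passes above repeats the same per-key sequence: host/auth.sh first installs one of the pre-generated DHHC-1 secrets on the target side (nvmet_auth_set_key), then connect_authenticate pins the host to a single digest/dhgroup pair, resolves the RDMA target address via get_main_ns_ip (NVMF_FIRST_TARGET_IP, 192.168.100.8), attaches a controller with the matching key, confirms nvme0 shows up, and detaches it again. A minimal sketch of the host-side RPC calls, assuming the usual scripts/rpc.py entry point stands in for the rpc_cmd wrapper seen in the trace:

  # Sketch only: one authenticated attach/verify/detach round (sha384 + ffdhe2048, key 1).
  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The names key1/ckey1 refer to keys the test registered earlier, outside this excerpt; only the options and flags already visible in the trace are used here.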
00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.783 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.041 nvme0n1 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.041 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.300 nvme0n1 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.300 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.558 nvme0n1 00:34:52.558 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.558 16:11:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.558 16:11:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.558 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
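The trace has now moved from ffdhe2048 to ffdhe3072 and is re-walking the key IDs; the host/auth.sh@101-104 markers above correspond to the nested loops driving it. A rough sketch of that structure, assuming the keys/ckeys arrays hold the DHHC-1 secrets printed in the trace and that the dhgroups list contains at least the groups seen in this excerpt:

  # Sketch of the iteration seen at host/auth.sh@101-104 (array contents assumed).
  digest=sha384
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
      for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # target-side secret
          connect_authenticate "$digest" "$dhgroup" "$keyid"   # set options, attach, verify, detach
      done
  done

Every (dhgroup, keyid) pair therefore produces one full attach/detach cycle like the ones logged above and below.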
00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.817 nvme0n1 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.817 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:53.075 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.076 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.334 nvme0n1 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.334 16:11:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.334 16:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.592 nvme0n1 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:53.592 16:11:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.592 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.593 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.593 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.593 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.593 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:53.593 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:53.593 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:53.593 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:53.593 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:53.593 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:53.593 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.593 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.850 nvme0n1 00:34:53.850 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.850 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.850 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:53.850 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.850 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.850 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.850 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.850 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.850 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.850 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:54.112 16:11:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.112 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.371 nvme0n1 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.371 16:11:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:54.371 16:11:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.371 16:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.630 nvme0n1 00:34:54.630 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.630 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.630 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.630 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.630 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.630 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.888 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.146 nvme0n1 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.147 16:11:40 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.147 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.713 nvme0n1 00:34:55.713 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.713 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.713 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.713 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.713 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.713 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.713 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.713 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.713 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.713 16:11:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:55.713 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.714 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.972 nvme0n1 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.972 16:11:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.972 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.973 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.539 nvme0n1 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 
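[note] The trace above repeats one pattern per digest/dhgroup/keyid combination: restrict the initiator's DH-HMAC-CHAP parameters, attach the controller with the per-key secrets, confirm the controller shows up, then detach. A minimal standalone sketch of that host-side sequence, issued with SPDK's rpc.py rather than the test's rpc_cmd wrapper; it assumes the secrets have already been registered under the names key1/ckey1 earlier in auth.sh (not shown in this excerpt):

  # limit the initiator to one digest and one DH group for this pass
  ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

  # attach over RDMA with the host key (key1) and the controller key (ckey1)
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
      -a 192.168.100.8 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # the attach only succeeds if authentication completed; verify, then clean up
  ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0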
00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.539 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.540 16:11:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.107 nvme0n1 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:57.107 16:11:42 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.107 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.673 nvme0n1 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
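[note] On the target side, each nvmet_auth_set_key call seen in the trace pushes the matching secrets to the kernel nvmet host entry before the attach is attempted. A rough sketch of what that amounts to for the ffdhe6144/keyid=2 case logged here, assuming the kernel target is driven through configfs and the host entry for nqn.2024-02.io.spdk:host0 was created earlier in auth.sh (paths are illustrative, not taken from this excerpt):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

  # host secret and bidirectional controller secret for keyid=2
  echo "DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT:" > "$host/dhchap_key"
  echo "DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M:" > "$host/dhchap_ctrl_key"

  # digest and DH group must match what the initiator was told to negotiate
  echo "hmac(sha384)" > "$host/dhchap_hash"
  echo "ffdhe6144"    > "$host/dhchap_dhgroup"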
00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:57.673 16:11:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:57.673 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:57.673 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.673 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.931 nvme0n1 00:34:57.931 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.931 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.931 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.931 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.931 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.931 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha384 ffdhe6144 4 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.189 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.447 nvme0n1 00:34:58.447 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.448 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.448 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.448 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.448 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.448 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.706 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.706 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller 
nvme0 00:34:58.706 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.706 16:11:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.706 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.273 nvme0n1 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:59.273 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.274 16:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:59.840 nvme0n1 00:34:59.840 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:00.098 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.099 16:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.665 nvme0n1 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:00.666 
16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.666 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.600 nvme0n1 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.600 16:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.167 nvme0n1 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.167 16:11:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:02.167 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:02.168 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:02.168 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.168 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.426 nvme0n1 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:02.426 16:11:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.426 16:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.684 nvme0n1 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.684 16:11:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
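For reference, the host-side part of the iteration traced above boils down to two SPDK JSON-RPC calls: bdev_nvme_set_options restricts the DH-HMAC-CHAP digest and DH group for this pass of the loop, and bdev_nvme_attach_controller authenticates against the RDMA listener with the selected host/controller key pair. The sketch below is a minimal reconstruction using scripts/rpc.py directly (rpc_cmd in the log appears to be the test suite's wrapper around it); the key names key0/ckey0 refer to keys registered earlier in the run, which is not shown in this excerpt.

# constrain the initiator to one digest / DH-group combination, as the loop above does
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# attach to the target at 192.168.100.8:4420 over RDMA, authenticating with the
# host key (key0) and, when present, the bidirectional controller key (ckey0)
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

On success the attach returns the namespace bdev name, which is the nvme0n1 printed between iterations in the log.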
00:35:02.684 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.950 nvme0n1 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.950 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:03.293 16:11:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.293 nvme0n1 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.293 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:03.554 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:03.555 16:11:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.555 16:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.555 nvme0n1 00:35:03.555 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.555 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.555 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.555 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.555 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.555 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.811 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.811 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.811 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.811 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.811 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.811 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 
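The records that follow start the next DH group (ffdhe3072) by re-provisioning the kernel nvmet target through nvmet_auth_set_key. The echo lines at host/auth.sh@48-51 presumably land in the nvmet configfs entry for the allowed host; the sketch below assumes the usual attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under /sys/kernel/config/nvmet/hosts/<hostnqn>, which are not visible in this excerpt, while the echoed values are taken from the log itself.

# target-side key provisioning, assuming the standard nvmet configfs layout
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$host/dhchap_hash"      # digest chosen by the outer loop
echo 'ffdhe3072'    > "$host/dhchap_dhgroup"   # DH group chosen by the outer loop
echo 'DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS:' > "$host/dhchap_key"
# the controller (bidirectional) key is only written when ckey${keyid} is non-empty
echo 'DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ' > "$host/dhchap_ctrl_key"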
00:35:03.811 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.811 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:03.811 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.811 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:03.811 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.812 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.069 nvme0n1 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.069 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.070 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.328 nvme0n1 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.328 16:11:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.328 16:11:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.328 16:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.586 nvme0n1 00:35:04.586 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.586 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.586 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.586 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.586 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.586 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.586 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.586 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 
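Every authenticated attach in this section is verified and torn down the same way before the next digest/DH-group/key combination runs: bdev_nvme_get_controllers is queried, jq extracts the controller name, the test asserts it equals nvme0, and bdev_nvme_detach_controller removes it. A condensed sketch of that check, again using scripts/rpc.py in place of the rpc_cmd wrapper:

# confirm the authenticated controller actually came up, then tear it down
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]                                   # the assertion at host/auth.sh@64
./scripts/rpc.py bdev_nvme_detach_controller nvme0     # cleanup at host/auth.sh@65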
00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:04.587 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.845 16:11:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.845 nvme0n1 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.845 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.103 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.104 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.104 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:05.104 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:05.104 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:05.104 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:05.104 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:05.104 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:05.104 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.104 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.363 nvme0n1 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
192.168.100.8 ]] 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.363 16:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.622 nvme0n1 00:35:05.622 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.622 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.622 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.622 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.622 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.622 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.622 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.622 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.622 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.622 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.880 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.881 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.139 nvme0n1 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.139 16:11:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.139 16:11:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.139 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.398 nvme0n1 00:35:06.398 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.398 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.398 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.398 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.398 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.398 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe4096 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.656 16:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.656 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.915 nvme0n1 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.915 
16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.915 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.173 nvme0n1 00:35:07.173 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:07.431 16:11:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:07.431 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.432 16:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.012 nvme0n1 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:08.012 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.013 16:11:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.013 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.271 nvme0n1 00:35:08.271 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.271 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.271 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.271 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.271 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.271 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
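The pattern repeating through this trace comes from two nested loops in host/auth.sh: for every DH group the script walks every key index, first installing the key on the kernel nvmet target side (nvmet_auth_set_key, the echo lines at auth.sh@48-50) and then authenticating a host-side connection against it (connect_authenticate, auth.sh@55-65). Reconstructed from the markers in the trace, the driver skeleton looks roughly like the sketch below; the keys[]/ckeys[] arrays and the dhgroups list are filled in earlier in auth.sh, outside this excerpt.

# Skeleton of the loop behind the trace (host/auth.sh@101-104), a sketch only.
# keys[], ckeys[] and dhgroups come from earlier in auth.sh; in this excerpt the
# digest is sha512 and the dhgroups run from ffdhe3072 up to ffdhe8192.
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Target side: set key/ckey for this keyid under hmac(sha512)/$dhgroup.
        nvmet_auth_set_key "sha512" "$dhgroup" "$keyid"
        # Host side: restrict bdev_nvme to this digest/dhgroup, attach with
        # --dhchap-key key$keyid (plus ckey$keyid when a controller key exists),
        # confirm nvme0 appears, then detach it.
        connect_authenticate "sha512" "$dhgroup" "$keyid"
    done
done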
00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 
00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.529 16:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.787 nvme0n1 00:35:08.787 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.787 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.787 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.787 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.787 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.787 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.046 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.304 nvme0n1 00:35:09.304 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.304 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.304 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.304 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.304 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:35:09.304 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.562 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.563 16:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.820 nvme0n1 00:35:09.820 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.820 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.820 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.820 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.820 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.820 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 
00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGQ4OGJiMWY3NjRjY2RiNjdmMWVkZGI4YWQxMDk3YzjSqhNS: 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: ]] 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc3ZTBkNjZhZDkwNjU2OGZkZjhmYTZiYmJlZmViNTFmMjdiZWRiMzE4ZGM1OWIyYmQ5MjZlMzYyMmM1OTU5YvELnU4=: 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:10.078 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:10.079 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:10.079 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.079 16:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.645 nvme0n1 
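Note: the connect_authenticate pass traced above is driven by two initiator-side RPCs, with rpc_cmd being the test harness's JSON-RPC wrapper. A minimal sketch, reusing the transport, addresses, NQNs and key names (key0/ckey0, registered earlier in the run) exactly as they appear in this trace:

    # limit the host to hmac(sha512) with the ffdhe8192 DH group, then connect with key0/ckey0
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # success is confirmed via bdev_nvme_get_controllers reporting nvme0, then the controller is detached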
00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.645 16:11:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.645 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.579 nvme0n1 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 
2 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 
00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.579 16:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.145 nvme0n1 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:Y2I5MTQyZTU2NDM4MDc3NjcwYzU1MzdiNGU5ZGZjYzJiMzQ4ZjFmNzVkYzliN2Y1ASQLeA==: 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: ]] 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTUzZDQyNTI4MDc3MjU0ZTRkZWI1OWJjOTE5MWQ4MmQlgEtX: 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:12.145 16:11:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:12.145 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:12.146 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:12.146 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:12.146 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:12.146 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:12.146 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.146 16:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.711 nvme0n1 00:35:12.711 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.711 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.711 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.711 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.711 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.711 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.711 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.711 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
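Note: the interleaved nvmet_auth_set_key calls configure the target side of each handshake. The echoed values (the 'hmac(sha512)' digest, the ffdhe8192 group and the DHHC-1 secrets, whose "DHHC-1:NN:" prefix records which SHA the base64 secret was transformed with, 00 meaning untransformed) land in the kernel nvmet host entry; a rough expansion, assuming the standard configfs attribute names under the /sys/kernel/config/nvmet tree that this log tears down later:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"      # digest negotiated for DH-HMAC-CHAP
    echo 'ffdhe8192'    > "$host/dhchap_dhgroup"   # FFDHE group
    echo "$key"         > "$host/dhchap_key"       # host secret from the trace above
    echo "$ckey"        > "$host/dhchap_ctrl_key"  # controller secret, only for bidirectional auth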
00:35:12.711 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.711 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZmYxZWY0NDU2Yzg3NmM4YzYxMzUxNmNmMDg3YmZjNGFlYWY4ZjcyOWRjMzQxMjQ5YzlkNGE3NTQ1MzkyNmYyMCtkUjU=: 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:12.969 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:12.970 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.970 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.536 nvme0n1 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:35:13.536 
16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:13.536 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:13.537 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:13.537 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:13.537 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:13.537 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:13.537 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.537 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:13.537 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.537 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:13.537 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.537 16:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.793 request: 00:35:13.793 { 00:35:13.793 "name": "nvme0", 00:35:13.793 "trtype": "rdma", 00:35:13.793 "traddr": "192.168.100.8", 00:35:13.793 "adrfam": "ipv4", 00:35:13.793 "trsvcid": "4420", 00:35:13.793 "subnqn": 
"nqn.2024-02.io.spdk:cnode0", 00:35:13.793 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:13.793 "prchk_reftag": false, 00:35:13.793 "prchk_guard": false, 00:35:13.793 "hdgst": false, 00:35:13.793 "ddgst": false, 00:35:13.793 "allow_unrecognized_csi": false, 00:35:13.793 "method": "bdev_nvme_attach_controller", 00:35:13.793 "req_id": 1 00:35:13.793 } 00:35:13.793 Got JSON-RPC error response 00:35:13.793 response: 00:35:13.793 { 00:35:13.793 "code": -5, 00:35:13.793 "message": "Input/output error" 00:35:13.793 } 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.793 request: 00:35:13.793 { 00:35:13.793 "name": "nvme0", 00:35:13.793 "trtype": "rdma", 00:35:13.793 "traddr": "192.168.100.8", 00:35:13.793 "adrfam": "ipv4", 00:35:13.793 "trsvcid": "4420", 00:35:13.793 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:13.793 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:13.793 "prchk_reftag": false, 00:35:13.793 "prchk_guard": false, 00:35:13.793 "hdgst": false, 00:35:13.793 "ddgst": false, 00:35:13.793 "dhchap_key": "key2", 00:35:13.793 "allow_unrecognized_csi": false, 00:35:13.793 "method": "bdev_nvme_attach_controller", 00:35:13.793 "req_id": 1 00:35:13.793 } 00:35:13.793 Got JSON-RPC error response 00:35:13.793 response: 00:35:13.793 { 00:35:13.793 "code": -5, 00:35:13.793 "message": "Input/output error" 00:35:13.793 } 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma 
]] 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.793 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.051 request: 00:35:14.051 { 00:35:14.051 "name": "nvme0", 00:35:14.051 "trtype": "rdma", 00:35:14.051 "traddr": "192.168.100.8", 00:35:14.051 "adrfam": "ipv4", 00:35:14.051 "trsvcid": "4420", 00:35:14.051 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:14.051 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:14.051 "prchk_reftag": false, 00:35:14.051 "prchk_guard": false, 00:35:14.051 "hdgst": false, 00:35:14.051 "ddgst": false, 00:35:14.051 "dhchap_key": "key1", 00:35:14.051 "dhchap_ctrlr_key": "ckey2", 00:35:14.051 "allow_unrecognized_csi": false, 00:35:14.051 "method": "bdev_nvme_attach_controller", 00:35:14.051 "req_id": 1 00:35:14.051 } 00:35:14.051 Got JSON-RPC error response 00:35:14.051 response: 00:35:14.051 { 00:35:14.051 "code": -5, 00:35:14.051 "message": "Input/output error" 00:35:14.051 } 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:35:14.051 16:11:59 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.051 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.309 nvme0n1 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.309 
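Note: from here the suite stops reconnecting from scratch and instead re-keys the live controller. The attach just above deliberately used --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 so that a rejected re-authentication destroys the controller within a couple of seconds. The successful rotation is a single call, sketched with the key names from this run:

    # rotate the DH-HMAC-CHAP keys on the existing nvme0 controller (the target was switched to key2 first)
    rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2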
16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.309 request: 00:35:14.309 { 00:35:14.309 "name": "nvme0", 00:35:14.309 "dhchap_key": "key1", 00:35:14.309 "dhchap_ctrlr_key": "ckey2", 00:35:14.309 "method": "bdev_nvme_set_keys", 00:35:14.309 "req_id": 1 00:35:14.309 } 00:35:14.309 Got JSON-RPC error response 00:35:14.309 response: 00:35:14.309 { 00:35:14.309 "code": -13, 00:35:14.309 "message": "Permission denied" 00:35:14.309 } 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.309 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.567 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.567 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:14.567 16:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:15.500 16:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.500 16:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:15.500 16:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.500 16:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.500 16:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.500 16:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:15.500 16:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWUwNTIyMTYwNjdhYzQzMjRlMjRmYmRmZWE2YWFiZGE5M2ZjYzQ0MGZlNThjMjRl9JY2kQ==: 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: ]] 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDliOTNmMWJiYTQxODgwNGQ4ZWMzMzRlMmQ1ZDUzYTRlOTE1NjliNzI3NDc1NmNlAxibjA==: 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z rdma ]] 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_FIRST_TARGET_IP 00:35:16.443 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 192.168.100.8 ]] 00:35:16.702 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 192.168.100.8 00:35:16.702 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:16.702 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.702 16:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.702 nvme0n1 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2ZiYmU1MmM3YjA2MjkxNTY1YmExYmI5N2ZlYjIxYzChqfTT: 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: ]] 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjAxODE0ODBkOTg5ZDNhNTZkMjIyNTcwYTk5YjcwODmazf3M: 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.702 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.960 request: 00:35:16.960 { 00:35:16.960 "name": "nvme0", 00:35:16.960 "dhchap_key": "key2", 00:35:16.960 "dhchap_ctrlr_key": "ckey1", 00:35:16.960 "method": "bdev_nvme_set_keys", 00:35:16.960 "req_id": 1 00:35:16.960 } 00:35:16.960 Got JSON-RPC error response 00:35:16.960 response: 00:35:16.961 { 00:35:16.961 "code": -13, 00:35:16.961 "message": "Permission denied" 00:35:16.961 } 00:35:16.961 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:16.961 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:16.961 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:16.961 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:16.961 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:16.961 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.961 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:16.961 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.961 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.961 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.961 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:16.961 16:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:17.895 16:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.895 16:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:17.895 16:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.895 16:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.895 16:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.895 16:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:17.895 16:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:18.829 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.829 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:18.829 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.829 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.829 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:19.086 
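Note: the two "Permission denied" responses above are the expected-failure half of the re-key test: bdev_nvme_set_keys is rejected with -13 when the host offers keys the target no longer holds (key1/ckey2, then key2/ckey1), just as the earlier attach attempts with a missing or wrong key failed with -5 (Input/output error). After each rejected rotation the suite only has to wait for the 1-second ctrlr-loss timeout to reap the controller, roughly:

    # poll until the failed re-authentication has torn the controller down
    while (( $(rpc_cmd bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done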
16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:35:19.086 rmmod nvme_rdma 00:35:19.086 rmmod nvme_fabrics 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3526978 ']' 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3526978 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3526978 ']' 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3526978 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3526978 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:19.086 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3526978' 00:35:19.086 killing process with pid 3526978 00:35:19.087 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3526978 00:35:19.087 16:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3526978 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:20.020 
16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_rdma nvmet 00:35:20.020 16:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:35:23.304 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:23.304 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:25.207 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:35:25.207 16:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.FAj /tmp/spdk.key-null.24T /tmp/spdk.key-sha256.ZnV /tmp/spdk.key-sha384.B1i /tmp/spdk.key-sha512.l2l /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:35:25.207 16:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:35:28.489 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:80:04.1 (8086 2021): Already 
using the vfio-pci driver 00:35:28.489 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:28.489 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:28.489 00:35:28.489 real 1m2.601s 00:35:28.489 user 0m56.530s 00:35:28.489 sys 0m14.906s 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.489 ************************************ 00:35:28.489 END TEST nvmf_auth_host 00:35:28.489 ************************************ 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.489 ************************************ 00:35:28.489 START TEST nvmf_bdevperf 00:35:28.489 ************************************ 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:35:28.489 * Looking for test storage... 
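The remainder of this trace is the nvmf_bdevperf host test: bdevperf.sh sources test/nvmf/common.sh, detects and configures the Mellanox RDMA ports, starts an nvmf_tgt, exports a malloc bdev over NVMe/RDMA, and drives it with the bdevperf example application. A minimal sketch of rerunning just this stage by hand, assuming a built SPDK tree at the workspace path shown above, RDMA-capable NICs, and root privileges; only the script path and the --transport flag are taken verbatim from the trace.

# Illustrative rerun of this test stage (mirrors the run_test call captured above).
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/test/nvmf/host/bdevperf.sh" --transport=rdma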
00:35:28.489 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:35:28.489 16:12:13 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:28.489 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:28.489 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:28.489 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:28.489 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:28.489 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:28.489 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:28.489 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:28.489 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:28.748 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:28.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.749 --rc genhtml_branch_coverage=1 00:35:28.749 --rc genhtml_function_coverage=1 00:35:28.749 --rc genhtml_legend=1 00:35:28.749 --rc geninfo_all_blocks=1 00:35:28.749 --rc geninfo_unexecuted_blocks=1 00:35:28.749 00:35:28.749 ' 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:28.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.749 --rc genhtml_branch_coverage=1 00:35:28.749 --rc genhtml_function_coverage=1 00:35:28.749 --rc genhtml_legend=1 00:35:28.749 --rc geninfo_all_blocks=1 00:35:28.749 --rc geninfo_unexecuted_blocks=1 00:35:28.749 00:35:28.749 ' 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:28.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.749 --rc genhtml_branch_coverage=1 00:35:28.749 --rc genhtml_function_coverage=1 00:35:28.749 --rc genhtml_legend=1 00:35:28.749 --rc geninfo_all_blocks=1 00:35:28.749 --rc geninfo_unexecuted_blocks=1 00:35:28.749 00:35:28.749 ' 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:28.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.749 --rc genhtml_branch_coverage=1 00:35:28.749 --rc genhtml_function_coverage=1 00:35:28.749 --rc genhtml_legend=1 00:35:28.749 --rc geninfo_all_blocks=1 00:35:28.749 --rc geninfo_unexecuted_blocks=1 00:35:28.749 00:35:28.749 ' 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:28.749 16:12:14 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:28.749 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:28.749 16:12:14 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:28.749 16:12:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:35.308 16:12:20 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:35:35.308 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:35:35.308 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:35:35.308 Found net devices under 0000:d9:00.0: mlx_0_0 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:35:35.308 Found net devices under 0000:d9:00.1: mlx_0_1 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # rdma_device_init 00:35:35.308 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@530 -- # allocate_nic_ips 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:35:35.309 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:35.309 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:35:35.309 altname enp217s0f0np0 00:35:35.309 altname ens818f0np0 00:35:35.309 inet 192.168.100.8/24 scope global mlx_0_0 00:35:35.309 valid_lft forever preferred_lft forever 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:35:35.309 16:12:20 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:35:35.309 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:35:35.309 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:35:35.309 altname enp217s0f1np1 00:35:35.309 altname ens818f1np1 00:35:35.309 inet 192.168.100.9/24 scope global mlx_0_1 00:35:35.309 valid_lft forever preferred_lft forever 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:35:35.309 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:35:35.569 16:12:20 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:35:35.569 192.168.100.9' 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:35:35.569 192.168.100.9' 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # head -n 1 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:35:35.569 192.168.100.9' 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # tail -n +2 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # head -n 1 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3542985 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3542985 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3542985 ']' 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.569 16:12:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:35.569 [2024-11-17 16:12:21.055607] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:35:35.569 [2024-11-17 16:12:21.055708] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.827 [2024-11-17 16:12:21.188883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:35.827 [2024-11-17 16:12:21.288624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.828 [2024-11-17 16:12:21.288674] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.828 [2024-11-17 16:12:21.288686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.828 [2024-11-17 16:12:21.288699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.828 [2024-11-17 16:12:21.288709] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
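At this point nvmfappstart has launched the target and waitforlisten is blocking until its RPC socket answers; the rpc_cmd calls that follow provision the RDMA transport, a 64 MiB malloc namespace, and a listener on 192.168.100.8:4420. A condensed sketch of the same sequence using scripts/rpc.py directly, with every flag and argument copied from the trace; the polling loop is a simplified stand-in for waitforlisten, not its actual implementation.

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

# Start the target: shm id 0, tracepoint mask 0xFFFF, core mask 0xE (cores 1-3).
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Wait for the RPC server to come up (simplified waitforlisten).
until rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

# Provisioning steps that appear as rpc_cmd calls in the trace below.
rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420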
00:35:35.828 [2024-11-17 16:12:21.290964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:35.828 [2024-11-17 16:12:21.291028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.828 [2024-11-17 16:12:21.291036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:36.395 16:12:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:36.395 16:12:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:36.395 16:12:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:36.395 16:12:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:36.395 16:12:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.395 16:12:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.395 16:12:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:35:36.395 16:12:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.395 16:12:21 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.395 [2024-11-17 16:12:21.929899] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f8c2d11d940) succeed. 00:35:36.653 [2024-11-17 16:12:21.939217] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f8c2cfbd940) succeed. 00:35:36.653 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.653 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:36.653 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.653 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.911 Malloc0 00:35:36.911 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.911 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:36.911 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.911 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.911 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.911 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:35:36.912 [2024-11-17 16:12:22.237698] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:36.912 { 00:35:36.912 "params": { 00:35:36.912 "name": "Nvme$subsystem", 00:35:36.912 "trtype": "$TEST_TRANSPORT", 00:35:36.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.912 "adrfam": "ipv4", 00:35:36.912 "trsvcid": "$NVMF_PORT", 00:35:36.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.912 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.912 "hdgst": ${hdgst:-false}, 00:35:36.912 "ddgst": ${ddgst:-false} 00:35:36.912 }, 00:35:36.912 "method": "bdev_nvme_attach_controller" 00:35:36.912 } 00:35:36.912 EOF 00:35:36.912 )") 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:36.912 16:12:22 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:36.912 "params": { 00:35:36.912 "name": "Nvme1", 00:35:36.912 "trtype": "rdma", 00:35:36.912 "traddr": "192.168.100.8", 00:35:36.912 "adrfam": "ipv4", 00:35:36.912 "trsvcid": "4420", 00:35:36.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:36.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:36.912 "hdgst": false, 00:35:36.912 "ddgst": false 00:35:36.912 }, 00:35:36.912 "method": "bdev_nvme_attach_controller" 00:35:36.912 }' 00:35:36.912 [2024-11-17 16:12:22.323919] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:35:36.912 [2024-11-17 16:12:22.324014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3543130 ] 00:35:37.170 [2024-11-17 16:12:22.458382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.170 [2024-11-17 16:12:22.565718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:37.735 Running I/O for 1 seconds... 
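The --json /dev/fd/62 argument feeds bdevperf a bdev-subsystem configuration generated on the fly by gen_nvmf_target_json; the bdev_nvme_attach_controller parameters printed above are its payload. A rough standalone equivalent is sketched here, with the attach parameters and bdevperf flags copied from the trace; the outer subsystems/bdev wrapper is assumed rather than shown verbatim in the log.

# Write an equivalent config to a file instead of a process-substitution fd.
# Only the "params" block below is verbatim from the trace; the wrapper is assumed.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# Same workload as the trace: queue depth 128, 4 KiB I/O, verify, 1 second.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 1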
00:35:38.699 15616.00 IOPS, 61.00 MiB/s 00:35:38.699 Latency(us) 00:35:38.699 [2024-11-17T15:12:24.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.699 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:38.699 Verification LBA range: start 0x0 length 0x4000 00:35:38.699 Nvme1n1 : 1.01 15666.37 61.20 0.00 0.00 8117.89 606.21 18140.36 00:35:38.699 [2024-11-17T15:12:24.244Z] =================================================================================================================== 00:35:38.700 [2024-11-17T15:12:24.244Z] Total : 15666.37 61.20 0.00 0.00 8117.89 606.21 18140.36 00:35:39.686 16:12:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3543593 00:35:39.686 16:12:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:35:39.686 16:12:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:35:39.686 16:12:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:35:39.686 16:12:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:35:39.686 16:12:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:35:39.686 16:12:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:39.686 16:12:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:39.686 { 00:35:39.686 "params": { 00:35:39.686 "name": "Nvme$subsystem", 00:35:39.686 "trtype": "$TEST_TRANSPORT", 00:35:39.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.686 "adrfam": "ipv4", 00:35:39.686 "trsvcid": "$NVMF_PORT", 00:35:39.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.686 "hdgst": ${hdgst:-false}, 00:35:39.686 "ddgst": ${ddgst:-false} 00:35:39.686 }, 00:35:39.686 "method": "bdev_nvme_attach_controller" 00:35:39.686 } 00:35:39.686 EOF 00:35:39.686 )") 00:35:39.686 16:12:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:35:39.686 16:12:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:35:39.686 16:12:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:35:39.686 16:12:24 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:39.686 "params": { 00:35:39.686 "name": "Nvme1", 00:35:39.686 "trtype": "rdma", 00:35:39.686 "traddr": "192.168.100.8", 00:35:39.686 "adrfam": "ipv4", 00:35:39.686 "trsvcid": "4420", 00:35:39.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:39.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:39.686 "hdgst": false, 00:35:39.686 "ddgst": false 00:35:39.686 }, 00:35:39.686 "method": "bdev_nvme_attach_controller" 00:35:39.686 }' 00:35:39.686 [2024-11-17 16:12:24.970768] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
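A second bdevperf instance now runs the same verify workload for 15 seconds, and partway through the script hard-kills the nvmf_tgt (pid 3542985) and sleeps 3 seconds; the repeated "ABORTED - SQ DELETION" completions that follow appear to be the host side draining the I/O queued on the qpair it just lost. The failover step in isolation, with the kill and sleep taken from the trace and the pid variable standing in for the value captured when the target was started:

# Hard-kill the target mid-run, as host/bdevperf.sh does at this point in the trace.
kill -9 "$nvmfpid"   # $nvmfpid was recorded at target start (3542985 in this run)
sleep 3              # outstanding WRITEs on the dead qpair then complete with
                     # "ABORTED - SQ DELETION", the flood of messages logged below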
00:35:39.686 [2024-11-17 16:12:24.970859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3543593 ] 00:35:39.686 [2024-11-17 16:12:25.103020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.686 [2024-11-17 16:12:25.205528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.254 Running I/O for 15 seconds... 00:35:42.127 15616.00 IOPS, 61.00 MiB/s [2024-11-17T15:12:27.930Z] 15700.50 IOPS, 61.33 MiB/s [2024-11-17T15:12:27.930Z] 16:12:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3542985 00:35:42.386 16:12:27 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:35:43.582 11989.33 IOPS, 46.83 MiB/s [2024-11-17T15:12:29.126Z] [2024-11-17 16:12:28.939212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 
[2024-11-17 16:12:28.939478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:26000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:26008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:26024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:26032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:26040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.582 [2024-11-17 16:12:28.939723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:26048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.582 [2024-11-17 16:12:28.939735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.939748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:26056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.939759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.939772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.939784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.939798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:26072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.939809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.939822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:26080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.939834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.939847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:26088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.939858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.939872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:26096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.939885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.939899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:26104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.939910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.939923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.939934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.939947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:26120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.939958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.939970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:26128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.939982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.939995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:26136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:26152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:26160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:26168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940478] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:26328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:26344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.583 [2024-11-17 16:12:28.940682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.583 [2024-11-17 16:12:28.940694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.940707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:26368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.940719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.940731] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:26376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.940743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.940756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.940767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.940781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:26392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.940792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.940807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.940825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.940839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:26408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.940850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.940863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.940875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.940888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.940900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.940913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.940925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.940938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.940949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.940962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.940973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.940986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:26456 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.940997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:43.584 [2024-11-17 16:12:28.941243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:26560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:26568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:26576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:26592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:26608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:26616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:43.584 [2024-11-17 16:12:28.941485] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fd000 len:0x1000 key:0x180d00 00:35:43.584 [2024-11-17 16:12:28.941512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043fb000 len:0x1000 key:0x180d00 00:35:43.584 [2024-11-17 16:12:28.941537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:25616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f9000 len:0x1000 key:0x180d00 00:35:43.584 [2024-11-17 16:12:28.941562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f7000 len:0x1000 key:0x180d00 00:35:43.584 [2024-11-17 16:12:28.941599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f5000 len:0x1000 key:0x180d00 00:35:43.584 [2024-11-17 16:12:28.941624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f3000 len:0x1000 key:0x180d00 00:35:43.584 [2024-11-17 16:12:28.941648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.584 [2024-11-17 16:12:28.941662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043f1000 len:0x1000 key:0x180d00 00:35:43.584 [2024-11-17 16:12:28.941673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.941685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ef000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.941710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:25664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ed000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:43.585 [2024-11-17 16:12:28.941736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043eb000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.941760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e9000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.941784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e7000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.941808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e5000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.941832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e3000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.941857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043e1000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.941881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043df000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.941907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043dd000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.941932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:25736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043db000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.941957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:25744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d9000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.941981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d7000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.941993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d5000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d3000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043d1000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cf000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cd000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cb000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c9000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c7000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942193] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c5000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c3000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c1000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bf000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bd000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bb000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:25872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b9000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b7000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.942408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b5000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:43.585 [2024-11-17 16:12:28.942439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b3000 len:0x1000 key:0x180d00 00:35:43.585 [2024-11-17 16:12:28.942451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.944596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:35:43.585 [2024-11-17 16:12:28.944622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:35:43.585 [2024-11-17 16:12:28.944635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25904 len:8 PRP1 0x0 PRP2 0x0 00:35:43.585 [2024-11-17 16:12:28.944649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:43.585 [2024-11-17 16:12:28.947804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:43.585 [2024-11-17 16:12:28.974439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:35:43.585 [2024-11-17 16:12:28.978353] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:43.585 [2024-11-17 16:12:28.978383] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:43.585 [2024-11-17 16:12:28.978394] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:35:44.719 8992.00 IOPS, 35.12 MiB/s [2024-11-17T15:12:30.263Z] [2024-11-17 16:12:29.982740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:35:44.719 [2024-11-17 16:12:29.982815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:44.719 [2024-11-17 16:12:29.983368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:44.719 [2024-11-17 16:12:29.983382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:44.719 [2024-11-17 16:12:29.983395] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:35:44.719 [2024-11-17 16:12:29.983413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
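Everything between the kill -9 of the first target and this point is the host side of the disconnect: each ABORTED - SQ DELETION (00/08, i.e. generic status "Command Aborted due to SQ Deletion") completion is an in-flight READ or WRITE being failed back as its submission queue is deleted when the RDMA qpair is torn down, and bdev_nvme then loops through "resetting controller" attempts that end in RDMA_CM_EVENT_REJECTED while the killed target is gone. A hedged way to watch that loop from a second shell, illustration only: the run above does not pass -r, and the socket path is an assumption.

# assumes bdevperf was started with an extra "-r /var/tmp/bdevperf.sock" RPC socket
watch -n 1 ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers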
00:35:44.719 [2024-11-17 16:12:29.987450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:44.719 [2024-11-17 16:12:29.991314] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:44.719 [2024-11-17 16:12:29.991339] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:44.719 [2024-11-17 16:12:29.991351] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:35:45.544 7193.60 IOPS, 28.10 MiB/s [2024-11-17T15:12:31.088Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3542985 Killed "${NVMF_APP[@]}" "$@" 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3544653 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3544653 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3544653 ']' 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.544 16:12:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:45.544 [2024-11-17 16:12:30.993132] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:35:45.544 [2024-11-17 16:12:30.993221] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.544 [2024-11-17 16:12:30.995456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:35:45.544 [2024-11-17 16:12:30.995494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:35:45.544 [2024-11-17 16:12:30.995708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:45.544 [2024-11-17 16:12:30.995726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:45.544 [2024-11-17 16:12:30.995740] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:35:45.544 [2024-11-17 16:12:30.995757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:45.544 [2024-11-17 16:12:31.000153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:45.544 [2024-11-17 16:12:31.003235] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:45.544 [2024-11-17 16:12:31.003263] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:45.544 [2024-11-17 16:12:31.003279] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:35:45.803 [2024-11-17 16:12:31.135108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:45.803 [2024-11-17 16:12:31.236036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.803 [2024-11-17 16:12:31.236081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.803 [2024-11-17 16:12:31.236093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.803 [2024-11-17 16:12:31.236106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.803 [2024-11-17 16:12:31.236116] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
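With the old target gone (the shell reports pid 3542985 Killed), tgt_init brings up a fresh one: nvmfappstart runs build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE, i.e. shared-memory instance id 0, the full tracepoint group mask (which is what the app_setup_trace notices just above are reporting), and core mask 0xE, so reactors start on cores 1-3 below. A minimal restart sketch, assuming it is run from the same spdk checkout:

cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
# crude stand-in for the harness's waitforlisten: poll until the RPC socket answers
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done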
00:35:45.803 [2024-11-17 16:12:31.238378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:45.803 [2024-11-17 16:12:31.238438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.803 [2024-11-17 16:12:31.238447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:46.370 5994.67 IOPS, 23.42 MiB/s [2024-11-17T15:12:31.914Z] 16:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:46.370 16:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:35:46.370 16:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:46.370 16:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:46.370 16:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.370 16:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:46.370 16:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:35:46.370 16:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.370 16:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.370 [2024-11-17 16:12:31.873776] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x612000028e40/0x7f81245bd940) succeed. 00:35:46.370 [2024-11-17 16:12:31.883142] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000028fc0/0x7f8124579940) succeed. 00:35:46.627 [2024-11-17 16:12:32.007358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:35:46.627 [2024-11-17 16:12:32.007406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:35:46.627 [2024-11-17 16:12:32.007616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:46.627 [2024-11-17 16:12:32.007633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:46.627 [2024-11-17 16:12:32.007648] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:35:46.627 [2024-11-17 16:12:32.007667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:35:46.628 [2024-11-17 16:12:32.012195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:46.628 [2024-11-17 16:12:32.015265] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:35:46.628 [2024-11-17 16:12:32.015293] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:35:46.628 [2024-11-17 16:12:32.015305] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000105ff800 00:35:46.628 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.628 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:46.628 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.628 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.628 Malloc0 00:35:46.628 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.628 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:46.628 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.628 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.628 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.628 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:46.628 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.628 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.884 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.884 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:35:46.884 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.884 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:46.884 [2024-11-17 16:12:32.175758] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:35:46.884 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.884 16:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3543593 00:35:47.709 5138.29 IOPS, 20.07 MiB/s [2024-11-17T15:12:33.253Z] [2024-11-17 16:12:33.019675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:35:47.709 [2024-11-17 16:12:33.019711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
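The rpc_cmd calls above rebuild the target configuration from scratch; rpc_cmd is effectively the test framework's wrapper around scripts/rpc.py, so the same sequence can be written out explicitly (every argument is taken from the trace above, only the rpc.py spelling is assumed):

# RDMA transport with 1024 shared buffers and an 8 KiB I/O unit size
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# 64 MiB malloc bdev with 512-byte blocks to back the namespace
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# subsystem cnode1, any host allowed (-a), fixed serial number
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# the listener the host has been trying to reconnect to all along
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Once the '*** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***' notice appears, the host's reset loop can finally succeed; that is the 'Resetting controller successful' message and the recovering IOPS samples further down.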
00:35:47.709 [2024-11-17 16:12:33.019907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:35:47.709 [2024-11-17 16:12:33.019922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:35:47.709 [2024-11-17 16:12:33.019938] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] already in failed state 00:35:47.709 [2024-11-17 16:12:33.019955] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:35:47.709 [2024-11-17 16:12:33.025317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:35:47.709 [2024-11-17 16:12:33.065326] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:35:49.211 5578.62 IOPS, 21.79 MiB/s [2024-11-17T15:12:35.690Z] 6700.11 IOPS, 26.17 MiB/s [2024-11-17T15:12:37.066Z] 7598.90 IOPS, 29.68 MiB/s [2024-11-17T15:12:38.002Z] 8335.73 IOPS, 32.56 MiB/s [2024-11-17T15:12:38.938Z] 8952.67 IOPS, 34.97 MiB/s [2024-11-17T15:12:39.874Z] 9472.08 IOPS, 37.00 MiB/s [2024-11-17T15:12:40.812Z] 9916.93 IOPS, 38.74 MiB/s [2024-11-17T15:12:40.812Z] 10300.80 IOPS, 40.24 MiB/s 00:35:55.268 Latency(us) 00:35:55.268 [2024-11-17T15:12:40.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.268 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:35:55.268 Verification LBA range: start 0x0 length 0x4000 00:35:55.268 Nvme1n1 : 15.01 10301.92 40.24 12536.42 0.00 5583.55 743.83 1053609.16 00:35:55.268 [2024-11-17T15:12:40.812Z] =================================================================================================================== 00:35:55.268 [2024-11-17T15:12:40.812Z] Total : 10301.92 40.24 12536.42 0.00 5583.55 743.83 1053609.16 00:35:56.204 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:35:56.204 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:56.204 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.204 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:56.204 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.204 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:35:56.204 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:35:56.204 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:35:56.205 rmmod nvme_rdma 00:35:56.205 rmmod nvme_fabrics 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3544653 ']' 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3544653 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3544653 ']' 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3544653 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3544653 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3544653' 00:35:56.205 killing process with pid 3544653 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3544653 00:35:56.205 16:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3544653 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:35:58.111 00:35:58.111 real 0m29.489s 00:35:58.111 user 1m16.054s 00:35:58.111 sys 0m6.990s 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:35:58.111 ************************************ 00:35:58.111 END TEST nvmf_bdevperf 00:35:58.111 ************************************ 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:58.111 ************************************ 00:35:58.111 START TEST nvmf_target_disconnect 00:35:58.111 ************************************ 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:35:58.111 * Looking for test storage... 
00:35:58.111 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:58.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.111 --rc genhtml_branch_coverage=1 00:35:58.111 --rc genhtml_function_coverage=1 00:35:58.111 --rc genhtml_legend=1 00:35:58.111 --rc geninfo_all_blocks=1 00:35:58.111 --rc geninfo_unexecuted_blocks=1 00:35:58.111 00:35:58.111 ' 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:58.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.111 --rc genhtml_branch_coverage=1 00:35:58.111 --rc genhtml_function_coverage=1 00:35:58.111 --rc genhtml_legend=1 00:35:58.111 --rc geninfo_all_blocks=1 00:35:58.111 --rc geninfo_unexecuted_blocks=1 00:35:58.111 00:35:58.111 ' 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:58.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.111 --rc genhtml_branch_coverage=1 00:35:58.111 --rc genhtml_function_coverage=1 00:35:58.111 --rc genhtml_legend=1 00:35:58.111 --rc geninfo_all_blocks=1 00:35:58.111 --rc geninfo_unexecuted_blocks=1 00:35:58.111 00:35:58.111 ' 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:58.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:58.111 --rc genhtml_branch_coverage=1 00:35:58.111 --rc genhtml_function_coverage=1 00:35:58.111 --rc genhtml_legend=1 00:35:58.111 --rc geninfo_all_blocks=1 00:35:58.111 --rc geninfo_unexecuted_blocks=1 00:35:58.111 00:35:58.111 ' 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.111 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:58.112 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:58.112 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:35:58.370 16:12:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:04.971 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:36:04.971 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:36:04.972 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:36:04.972 16:12:50 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:36:04.972 Found net devices under 0000:d9:00.0: mlx_0_0 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:36:04.972 Found net devices under 0000:d9:00.1: mlx_0_1 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:36:04.972 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # rdma_device_init 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@529 -- # load_ib_rdma_modules 
00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@530 -- # allocate_nic_ips 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:04.973 16:12:50 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:36:04.973 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:36:04.974 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:04.974 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:36:04.974 altname enp217s0f0np0 00:36:04.974 altname ens818f0np0 00:36:04.974 inet 192.168.100.8/24 scope global mlx_0_0 00:36:04.974 valid_lft forever preferred_lft forever 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:36:04.974 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:04.974 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:36:04.974 altname enp217s0f1np1 00:36:04.974 altname ens818f1np1 00:36:04.974 inet 192.168.100.9/24 scope global mlx_0_1 00:36:04.974 valid_lft forever preferred_lft forever 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:04.974 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:04.983 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:36:04.983 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:36:04.983 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:36:04.983 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:04.983 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:36:04.984 192.168.100.9' 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:36:04.984 192.168.100.9' 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # head -n 1 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:36:04.984 192.168.100.9' 00:36:04.984 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # head -n 1 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # tail -n +2 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:04.985 ************************************ 00:36:04.985 START TEST nvmf_target_disconnect_tc1 00:36:04.985 ************************************ 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:04.985 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:04.986 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:04.986 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:04.986 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:04.986 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:04.986 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:04.986 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:04.986 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:04.986 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:36:04.986 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:36:04.986 16:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:05.248 [2024-11-17 16:12:50.609200] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:05.249 [2024-11-17 16:12:50.609261] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:05.249 [2024-11-17 16:12:50.609274] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d6ec0 00:36:06.184 [2024-11-17 16:12:51.613317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] CQ transport error -6 (No such device or address) on qpair id 0 00:36:06.184 [2024-11-17 16:12:51.613367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] in failed state. 
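For context, tc1 passes only if this connect attempt fails. A condensed sketch of what the trace above is exercising, with the reconnect invocation copied from the trace and NOT being the test framework's exit-status inverter as echoed in the xtrace output (the helper's internals are not shown in this excerpt):
# No nvmf subsystem is listening on 192.168.100.8:4420 at this point in the run, so the
# RDMA CM reject and the resulting probe failure below are the expected outcome; a clean
# (zero) exit here would fail the test.
NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
    -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'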
00:36:06.184 [2024-11-17 16:12:51.613388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 0] Ctrlr is in error state 00:36:06.184 [2024-11-17 16:12:51.613447] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:06.184 [2024-11-17 16:12:51.613464] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:06.184 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:36:06.184 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:06.184 Initializing NVMe Controllers 00:36:06.184 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:06.184 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:06.184 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:06.184 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:06.184 00:36:06.184 real 0m1.321s 00:36:06.184 user 0m0.936s 00:36:06.184 sys 0m0.372s 00:36:06.184 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:06.184 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:06.184 ************************************ 00:36:06.184 END TEST nvmf_target_disconnect_tc1 00:36:06.184 ************************************ 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:06.443 ************************************ 00:36:06.443 START TEST nvmf_target_disconnect_tc2 00:36:06.443 ************************************ 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3550201 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3550201 00:36:06.443 16:12:51 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:06.443 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3550201 ']' 00:36:06.444 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:06.444 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:06.444 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:06.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:06.444 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:06.444 16:12:51 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:06.444 [2024-11-17 16:12:51.902446] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:06.444 [2024-11-17 16:12:51.902537] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:06.702 [2024-11-17 16:12:52.051344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:06.702 [2024-11-17 16:12:52.148685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:06.702 [2024-11-17 16:12:52.148734] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:06.702 [2024-11-17 16:12:52.148746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:06.702 [2024-11-17 16:12:52.148775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:06.702 [2024-11-17 16:12:52.148785] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:06.702 [2024-11-17 16:12:52.151231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:06.702 [2024-11-17 16:12:52.151325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:06.702 [2024-11-17 16:12:52.151393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:06.702 [2024-11-17 16:12:52.151419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:07.266 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:07.266 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:07.266 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:07.266 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:07.266 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.266 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:07.266 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:07.266 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.266 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.523 Malloc0 00:36:07.523 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.523 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:36:07.523 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.523 16:12:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.523 [2024-11-17 16:12:52.858073] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7f6754516940) succeed. 00:36:07.523 [2024-11-17 16:12:52.867897] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7f675371a940) succeed. 
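The target side here follows the same recipe as the bdevperf target earlier in the run: start nvmf_tgt, then provision a malloc-backed RDMA subsystem over RPC. A condensed sketch of that sequence, assuming the trace's rpc_cmd helper boils down to scripts/rpc.py against the default /var/tmp/spdk.sock socket (the helper's internals are not shown in this excerpt); all arguments are copied from the trace:
# -m 0xF0 pins the target's reactors to cores 4-7, matching the "Reactor started on core 4..7" notices above.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
# Provision the subsystem the host-side reconnect tool will target (order as in the trace).
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420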
00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.781 [2024-11-17 16:12:53.147690] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3550365 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:07.781 16:12:53 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:36:09.680 16:12:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
3550201 00:36:09.680 16:12:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:11.056 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Read completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 Write completed with error (sct=0, sc=8) 00:36:11.057 starting I/O failed 00:36:11.057 [2024-11-17 16:12:56.449219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:11.993 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3550201 Killed "${NVMF_APP[@]}" "$@" 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # 
nvmfappstart -m 0xF0 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3551075 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3551075 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3551075 ']' 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:11.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:11.993 16:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:11.993 [2024-11-17 16:12:57.266766] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
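The failed-completion burst above is the intended effect of the hard kill: the reconnect example was launched with queue depth 32 (-q 32) on core mask 0xF (cores 0-3), so when the first target (pid 3550201) receives SIGKILL its outstanding I/O completes in error and the CQ reports transport error -6 on qpair 1. disconnect_init then brings up a replacement target on the disjoint core mask 0xF0 (cores 4-7), so target and initiator never compete for reactors. A minimal stand-alone sketch of that bounce, reusing the binaries and flags recorded in the log ($SPDK_DIR and $nvmfpid are placeholder names, and rpc.py framework_wait_init stands in for the harness's own waitforlisten poll):

  # Sketch only: start the I/O workload, hard-kill the target, restart it on cores 4-7.
  $SPDK_DIR/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"                                      # hard-kill the first target; queued I/O fails
  sleep 2
  $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &   # restart on cores 4-7 (mask 0xF0)
  nvmfpid=$!
  $SPDK_DIR/scripts/rpc.py framework_wait_init            # wait until the new instance has finished initializing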
00:36:11.993 [2024-11-17 16:12:57.266876] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:11.993 [2024-11-17 16:12:57.423009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Write completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Write completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Write completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Write completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Write completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Write completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Read completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Write completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Write completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.993 Write completed with error (sct=0, sc=8) 00:36:11.993 starting I/O failed 00:36:11.994 Write completed with error (sct=0, sc=8) 00:36:11.994 starting I/O failed 00:36:11.994 Write completed with error (sct=0, sc=8) 00:36:11.994 starting I/O failed 00:36:11.994 Write completed with error (sct=0, sc=8) 00:36:11.994 starting I/O failed 00:36:11.994 Read completed with error (sct=0, sc=8) 00:36:11.994 starting I/O failed 00:36:11.994 Write completed with error (sct=0, sc=8) 00:36:11.994 starting I/O failed 00:36:11.994 Write completed with error (sct=0, sc=8) 00:36:11.994 starting I/O failed 00:36:11.994 Read completed with error (sct=0, sc=8) 00:36:11.994 starting I/O failed 00:36:11.994 Read completed with error (sct=0, sc=8) 00:36:11.994 starting I/O failed 00:36:11.994 [2024-11-17 16:12:57.454662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:11.994 [2024-11-17 16:12:57.524911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:11.994 [2024-11-17 16:12:57.524952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:11.994 [2024-11-17 16:12:57.524964] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:11.994 [2024-11-17 16:12:57.524978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:11.994 [2024-11-17 16:12:57.524988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:11.994 [2024-11-17 16:12:57.527460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:11.994 [2024-11-17 16:12:57.527554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:11.994 [2024-11-17 16:12:57.527626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:11.994 [2024-11-17 16:12:57.527651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:12.560 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:12.560 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:12.560 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:12.560 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:12.560 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.818 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:12.818 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:12.818 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.818 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.818 Malloc0 00:36:12.818 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.818 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:36:12.818 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.818 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:12.818 [2024-11-17 16:12:58.250204] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7fdcaa510940) succeed. 00:36:12.818 [2024-11-17 16:12:58.260140] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7fdcaa348940) succeed. 
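With the reactors up on cores 4-7 and both mlx5 ports registered (the two create_ib_device notices above), the harness re-provisions the new instance over its RPC socket. rpc_cmd in the trace forwards to scripts/rpc.py, so the two calls just logged correspond roughly to the following (a sketch; $SPDK_DIR is a placeholder):

  $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512-byte blocks
  $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024  # RDMA transport with 1024 shared data buffers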
00:36:13.076 Read completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Write completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Write completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Write completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Write completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Read completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Write completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Write completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Write completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Read completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Read completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Read completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Read completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Write completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Read completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Write completed with error (sct=0, sc=8) 00:36:13.076 starting I/O failed 00:36:13.076 Read completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Read completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Write completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Read completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Write completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Write completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Read completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Read completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Read completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Write completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Read completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Write completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Read completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Read completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Read completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 Write completed with error (sct=0, sc=8) 00:36:13.077 starting I/O failed 00:36:13.077 [2024-11-17 16:12:58.460261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:13.077 [2024-11-17 16:12:58.462415] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:13.077 [2024-11-17 16:12:58.462455] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:13.077 [2024-11-17 16:12:58.462469] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.077 [2024-11-17 16:12:58.542419] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.077 16:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3550365 00:36:14.010 [2024-11-17 16:12:59.466380] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.010 qpair failed and we were unable to recover it. 
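The subsystem is then rebuilt with the same NQN, serial number, namespace and listen address it had before the kill, so the initiator has something to reconnect to; "discovery" in the last call is the shorthand the script uses for the discovery subsystem. Roughly the equivalent rpc.py sequence (a sketch; $SPDK_DIR is a placeholder):

  $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

After that the script only waits for the reconnect example to finish (wait 3550365); everything below is the initiator trying to re-establish its queue pairs against the freshly started target.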
00:36:14.010 [2024-11-17 16:12:59.480615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.010 [2024-11-17 16:12:59.480726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.010 [2024-11-17 16:12:59.480762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.010 [2024-11-17 16:12:59.480782] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.010 [2024-11-17 16:12:59.480800] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.010 [2024-11-17 16:12:59.489349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.010 qpair failed and we were unable to recover it. 00:36:14.010 [2024-11-17 16:12:59.499397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.010 [2024-11-17 16:12:59.499478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.010 [2024-11-17 16:12:59.499504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.010 [2024-11-17 16:12:59.499521] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.010 [2024-11-17 16:12:59.499533] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.010 [2024-11-17 16:12:59.509416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.010 qpair failed and we were unable to recover it. 00:36:14.010 [2024-11-17 16:12:59.519266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.010 [2024-11-17 16:12:59.519339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.010 [2024-11-17 16:12:59.519370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.011 [2024-11-17 16:12:59.519385] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.011 [2024-11-17 16:12:59.519398] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.011 [2024-11-17 16:12:59.529652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.011 qpair failed and we were unable to recover it. 
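The repeated blocks above and below are individual retries of that reconnection. The host is still trying to attach I/O queue pairs to the controller ID it was assigned by the killed target (0x1), which the new instance does not know about, so _nvmf_ctrlr_add_io_qpair rejects them with "Unknown controller ID 0x1"; the host sees the Fabrics Connect completion fail with sct 1 / sc 130 (0x82, Connect Invalid Parameters), the RDMA queue pair is torn down (CQ transport error -6 on qpair id 3), and the example retries, producing the same sequence again. When triaging a run like this, the retries can be tallied from a saved copy of the console output (the file name console.log is a placeholder):

  grep -c 'Unknown controller ID 0x1' console.log                        # target-side rejections
  grep -c 'qpair failed and we were unable to recover it.' console.log   # host-side failed attempts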
00:36:14.011 [2024-11-17 16:12:59.539343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.011 [2024-11-17 16:12:59.539416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.011 [2024-11-17 16:12:59.539443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.011 [2024-11-17 16:12:59.539468] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.011 [2024-11-17 16:12:59.539480] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.011 [2024-11-17 16:12:59.549586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.011 qpair failed and we were unable to recover it. 00:36:14.268 [2024-11-17 16:12:59.559367] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.268 [2024-11-17 16:12:59.559434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.268 [2024-11-17 16:12:59.559462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.268 [2024-11-17 16:12:59.559476] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.559490] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.569509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 00:36:14.269 [2024-11-17 16:12:59.579606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.269 [2024-11-17 16:12:59.579676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.269 [2024-11-17 16:12:59.579702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.269 [2024-11-17 16:12:59.579718] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.579730] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.589726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 
00:36:14.269 [2024-11-17 16:12:59.599674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.269 [2024-11-17 16:12:59.599743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.269 [2024-11-17 16:12:59.599772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.269 [2024-11-17 16:12:59.599787] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.599801] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.609679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 00:36:14.269 [2024-11-17 16:12:59.619625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.269 [2024-11-17 16:12:59.619690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.269 [2024-11-17 16:12:59.619716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.269 [2024-11-17 16:12:59.619732] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.619746] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.631724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 00:36:14.269 [2024-11-17 16:12:59.639752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.269 [2024-11-17 16:12:59.639816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.269 [2024-11-17 16:12:59.639845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.269 [2024-11-17 16:12:59.639860] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.639873] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.650111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 
00:36:14.269 [2024-11-17 16:12:59.659830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.269 [2024-11-17 16:12:59.659900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.269 [2024-11-17 16:12:59.659926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.269 [2024-11-17 16:12:59.659944] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.659956] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.670049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 00:36:14.269 [2024-11-17 16:12:59.679808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.269 [2024-11-17 16:12:59.679871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.269 [2024-11-17 16:12:59.679904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.269 [2024-11-17 16:12:59.679919] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.679932] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.690071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 00:36:14.269 [2024-11-17 16:12:59.699972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.269 [2024-11-17 16:12:59.700034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.269 [2024-11-17 16:12:59.700060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.269 [2024-11-17 16:12:59.700076] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.700089] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.709973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 
00:36:14.269 [2024-11-17 16:12:59.720008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.269 [2024-11-17 16:12:59.720067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.269 [2024-11-17 16:12:59.720095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.269 [2024-11-17 16:12:59.720111] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.720127] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.730281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 00:36:14.269 [2024-11-17 16:12:59.739920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.269 [2024-11-17 16:12:59.739986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.269 [2024-11-17 16:12:59.740011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.269 [2024-11-17 16:12:59.740028] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.740040] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.750141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 00:36:14.269 [2024-11-17 16:12:59.760030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.269 [2024-11-17 16:12:59.760095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.269 [2024-11-17 16:12:59.760124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.269 [2024-11-17 16:12:59.760139] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.760152] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.770297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 
00:36:14.269 [2024-11-17 16:12:59.779977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.269 [2024-11-17 16:12:59.780047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.269 [2024-11-17 16:12:59.780072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.269 [2024-11-17 16:12:59.780089] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.780101] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.790377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 00:36:14.269 [2024-11-17 16:12:59.800077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.269 [2024-11-17 16:12:59.800141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.269 [2024-11-17 16:12:59.800172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.269 [2024-11-17 16:12:59.800186] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.269 [2024-11-17 16:12:59.800200] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.269 [2024-11-17 16:12:59.810374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.269 qpair failed and we were unable to recover it. 00:36:14.531 [2024-11-17 16:12:59.820085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.531 [2024-11-17 16:12:59.820163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.531 [2024-11-17 16:12:59.820188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.531 [2024-11-17 16:12:59.820204] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.531 [2024-11-17 16:12:59.820216] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.531 [2024-11-17 16:12:59.830553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.531 qpair failed and we were unable to recover it. 
00:36:14.531 [2024-11-17 16:12:59.840260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.531 [2024-11-17 16:12:59.840326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.531 [2024-11-17 16:12:59.840355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.531 [2024-11-17 16:12:59.840369] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.531 [2024-11-17 16:12:59.840385] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.531 [2024-11-17 16:12:59.850429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.531 qpair failed and we were unable to recover it. 00:36:14.531 [2024-11-17 16:12:59.860277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.531 [2024-11-17 16:12:59.860350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.531 [2024-11-17 16:12:59.860375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.531 [2024-11-17 16:12:59.860396] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.531 [2024-11-17 16:12:59.860408] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.531 [2024-11-17 16:12:59.870573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.531 qpair failed and we were unable to recover it. 00:36:14.531 [2024-11-17 16:12:59.880398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.531 [2024-11-17 16:12:59.880458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.531 [2024-11-17 16:12:59.880487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.531 [2024-11-17 16:12:59.880505] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.531 [2024-11-17 16:12:59.880518] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.531 [2024-11-17 16:12:59.890708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.532 qpair failed and we were unable to recover it. 
00:36:14.532 [2024-11-17 16:12:59.900428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.532 [2024-11-17 16:12:59.900498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.532 [2024-11-17 16:12:59.900524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.532 [2024-11-17 16:12:59.900542] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.532 [2024-11-17 16:12:59.900553] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.532 [2024-11-17 16:12:59.910448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.532 qpair failed and we were unable to recover it. 00:36:14.532 [2024-11-17 16:12:59.920435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.532 [2024-11-17 16:12:59.920496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.532 [2024-11-17 16:12:59.920524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.532 [2024-11-17 16:12:59.920538] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.532 [2024-11-17 16:12:59.920552] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.532 [2024-11-17 16:12:59.933473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.532 qpair failed and we were unable to recover it. 00:36:14.532 [2024-11-17 16:12:59.940601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.532 [2024-11-17 16:12:59.940667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.532 [2024-11-17 16:12:59.940692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.532 [2024-11-17 16:12:59.940709] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.532 [2024-11-17 16:12:59.940721] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.532 [2024-11-17 16:12:59.950976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.532 qpair failed and we were unable to recover it. 
00:36:14.532 [2024-11-17 16:12:59.960522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.532 [2024-11-17 16:12:59.960592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.532 [2024-11-17 16:12:59.960621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.532 [2024-11-17 16:12:59.960636] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.532 [2024-11-17 16:12:59.960650] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.532 [2024-11-17 16:12:59.971134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.532 qpair failed and we were unable to recover it. 00:36:14.532 [2024-11-17 16:12:59.980601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.532 [2024-11-17 16:12:59.980665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.532 [2024-11-17 16:12:59.980690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.532 [2024-11-17 16:12:59.980707] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.532 [2024-11-17 16:12:59.980718] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.532 [2024-11-17 16:12:59.990857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.532 qpair failed and we were unable to recover it. 00:36:14.532 [2024-11-17 16:13:00.000723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.532 [2024-11-17 16:13:00.000786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.532 [2024-11-17 16:13:00.000818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.532 [2024-11-17 16:13:00.000832] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.532 [2024-11-17 16:13:00.000846] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.532 [2024-11-17 16:13:00.010908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.532 qpair failed and we were unable to recover it. 
00:36:14.532 [2024-11-17 16:13:00.020774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.532 [2024-11-17 16:13:00.020849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.532 [2024-11-17 16:13:00.020875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.532 [2024-11-17 16:13:00.020892] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.532 [2024-11-17 16:13:00.020904] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.532 [2024-11-17 16:13:00.031230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.532 qpair failed and we were unable to recover it. 00:36:14.532 [2024-11-17 16:13:00.040890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.532 [2024-11-17 16:13:00.040955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.532 [2024-11-17 16:13:00.040983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.532 [2024-11-17 16:13:00.040998] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.532 [2024-11-17 16:13:00.041015] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.532 [2024-11-17 16:13:00.051127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.532 qpair failed and we were unable to recover it. 00:36:14.532 [2024-11-17 16:13:00.060972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.532 [2024-11-17 16:13:00.061043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.532 [2024-11-17 16:13:00.061068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.532 [2024-11-17 16:13:00.061085] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.532 [2024-11-17 16:13:00.061097] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.827 [2024-11-17 16:13:00.071040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.827 qpair failed and we were unable to recover it. 
00:36:14.827 [2024-11-17 16:13:00.081093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.827 [2024-11-17 16:13:00.081159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.827 [2024-11-17 16:13:00.081189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.827 [2024-11-17 16:13:00.081204] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.827 [2024-11-17 16:13:00.081218] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.827 [2024-11-17 16:13:00.091272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.827 qpair failed and we were unable to recover it. 00:36:14.827 [2024-11-17 16:13:00.101107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.827 [2024-11-17 16:13:00.101188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.827 [2024-11-17 16:13:00.101215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.827 [2024-11-17 16:13:00.101231] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.827 [2024-11-17 16:13:00.101243] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.827 [2024-11-17 16:13:00.111357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.827 qpair failed and we were unable to recover it. 00:36:14.827 [2024-11-17 16:13:00.121134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.827 [2024-11-17 16:13:00.121202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.827 [2024-11-17 16:13:00.121230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.827 [2024-11-17 16:13:00.121245] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.827 [2024-11-17 16:13:00.121259] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.827 [2024-11-17 16:13:00.131494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.827 qpair failed and we were unable to recover it. 
00:36:14.827 [2024-11-17 16:13:00.141157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.827 [2024-11-17 16:13:00.141236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.827 [2024-11-17 16:13:00.141266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.827 [2024-11-17 16:13:00.141283] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.827 [2024-11-17 16:13:00.141295] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.827 [2024-11-17 16:13:00.151484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.827 qpair failed and we were unable to recover it. 00:36:14.827 [2024-11-17 16:13:00.161182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.827 [2024-11-17 16:13:00.161257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.827 [2024-11-17 16:13:00.161285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.827 [2024-11-17 16:13:00.161299] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.827 [2024-11-17 16:13:00.161313] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.827 [2024-11-17 16:13:00.171524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.827 qpair failed and we were unable to recover it. 00:36:14.827 [2024-11-17 16:13:00.181383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.827 [2024-11-17 16:13:00.181452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.828 [2024-11-17 16:13:00.181478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.828 [2024-11-17 16:13:00.181498] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.828 [2024-11-17 16:13:00.181510] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.828 [2024-11-17 16:13:00.191588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.828 qpair failed and we were unable to recover it. 
00:36:14.828 [2024-11-17 16:13:00.201328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.828 [2024-11-17 16:13:00.201394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.828 [2024-11-17 16:13:00.201425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.828 [2024-11-17 16:13:00.201440] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.828 [2024-11-17 16:13:00.201454] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.828 [2024-11-17 16:13:00.211654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.828 qpair failed and we were unable to recover it. 00:36:14.828 [2024-11-17 16:13:00.221385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.828 [2024-11-17 16:13:00.221456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.828 [2024-11-17 16:13:00.221482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.828 [2024-11-17 16:13:00.221501] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.828 [2024-11-17 16:13:00.221513] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.828 [2024-11-17 16:13:00.232137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.828 qpair failed and we were unable to recover it. 00:36:14.828 [2024-11-17 16:13:00.241416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.828 [2024-11-17 16:13:00.241483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.828 [2024-11-17 16:13:00.241512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.828 [2024-11-17 16:13:00.241526] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.828 [2024-11-17 16:13:00.241540] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.828 [2024-11-17 16:13:00.251775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.828 qpair failed and we were unable to recover it. 
00:36:14.828 [2024-11-17 16:13:00.261562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.828 [2024-11-17 16:13:00.261637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.828 [2024-11-17 16:13:00.261663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.828 [2024-11-17 16:13:00.261679] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.828 [2024-11-17 16:13:00.261691] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.828 [2024-11-17 16:13:00.271902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.828 qpair failed and we were unable to recover it. 00:36:14.828 [2024-11-17 16:13:00.281628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.828 [2024-11-17 16:13:00.281695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.828 [2024-11-17 16:13:00.281723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.828 [2024-11-17 16:13:00.281737] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.828 [2024-11-17 16:13:00.281751] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.828 [2024-11-17 16:13:00.291897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.828 qpair failed and we were unable to recover it. 00:36:14.828 [2024-11-17 16:13:00.301585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.828 [2024-11-17 16:13:00.301653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.828 [2024-11-17 16:13:00.301678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.828 [2024-11-17 16:13:00.301696] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.828 [2024-11-17 16:13:00.301707] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.828 [2024-11-17 16:13:00.312220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.828 qpair failed and we were unable to recover it. 
00:36:14.828 [2024-11-17 16:13:00.321767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.828 [2024-11-17 16:13:00.321830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.828 [2024-11-17 16:13:00.321861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.828 [2024-11-17 16:13:00.321875] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.828 [2024-11-17 16:13:00.321889] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.828 [2024-11-17 16:13:00.332165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.828 qpair failed and we were unable to recover it. 00:36:14.828 [2024-11-17 16:13:00.341723] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.828 [2024-11-17 16:13:00.341787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.828 [2024-11-17 16:13:00.341813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.828 [2024-11-17 16:13:00.341829] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.828 [2024-11-17 16:13:00.341841] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:14.828 [2024-11-17 16:13:00.352122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:14.828 qpair failed and we were unable to recover it. 00:36:14.828 [2024-11-17 16:13:00.361816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:14.828 [2024-11-17 16:13:00.361876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:14.828 [2024-11-17 16:13:00.361905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:14.828 [2024-11-17 16:13:00.361919] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:14.828 [2024-11-17 16:13:00.361935] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.111 [2024-11-17 16:13:00.372159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.111 qpair failed and we were unable to recover it. 
00:36:15.111 [2024-11-17 16:13:00.381858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.111 [2024-11-17 16:13:00.381927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.111 [2024-11-17 16:13:00.381954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.111 [2024-11-17 16:13:00.381970] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.111 [2024-11-17 16:13:00.381981] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.111 [2024-11-17 16:13:00.392230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.111 qpair failed and we were unable to recover it. 00:36:15.111 [2024-11-17 16:13:00.401824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.111 [2024-11-17 16:13:00.401889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.111 [2024-11-17 16:13:00.401917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.111 [2024-11-17 16:13:00.401932] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.111 [2024-11-17 16:13:00.401947] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.111 [2024-11-17 16:13:00.412363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.111 qpair failed and we were unable to recover it. 00:36:15.111 [2024-11-17 16:13:00.422060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.112 [2024-11-17 16:13:00.422129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.112 [2024-11-17 16:13:00.422154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.112 [2024-11-17 16:13:00.422171] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.112 [2024-11-17 16:13:00.422183] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.112 [2024-11-17 16:13:00.432312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.112 qpair failed and we were unable to recover it. 
00:36:15.112 [2024-11-17 16:13:00.442102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.112 [2024-11-17 16:13:00.442169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.112 [2024-11-17 16:13:00.442198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.112 [2024-11-17 16:13:00.442212] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.112 [2024-11-17 16:13:00.442227] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.112 [2024-11-17 16:13:00.452371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.112 qpair failed and we were unable to recover it. 00:36:15.112 [2024-11-17 16:13:00.462108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.112 [2024-11-17 16:13:00.462181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.112 [2024-11-17 16:13:00.462206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.112 [2024-11-17 16:13:00.462223] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.112 [2024-11-17 16:13:00.462234] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.112 [2024-11-17 16:13:00.472417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.112 qpair failed and we were unable to recover it. 00:36:15.112 [2024-11-17 16:13:00.482293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.112 [2024-11-17 16:13:00.482358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.112 [2024-11-17 16:13:00.482388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.112 [2024-11-17 16:13:00.482402] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.112 [2024-11-17 16:13:00.482416] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.112 [2024-11-17 16:13:00.492574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.112 qpair failed and we were unable to recover it. 
00:36:15.112 [2024-11-17 16:13:00.502243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.112 [2024-11-17 16:13:00.502312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.112 [2024-11-17 16:13:00.502337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.112 [2024-11-17 16:13:00.502356] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.112 [2024-11-17 16:13:00.502368] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.112 [2024-11-17 16:13:00.512610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.112 qpair failed and we were unable to recover it. 00:36:15.112 [2024-11-17 16:13:00.522303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.112 [2024-11-17 16:13:00.522360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.112 [2024-11-17 16:13:00.522389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.112 [2024-11-17 16:13:00.522402] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.112 [2024-11-17 16:13:00.522417] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.112 [2024-11-17 16:13:00.532798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.112 qpair failed and we were unable to recover it. 00:36:15.112 [2024-11-17 16:13:00.542336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.112 [2024-11-17 16:13:00.542404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.112 [2024-11-17 16:13:00.542429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.112 [2024-11-17 16:13:00.542446] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.112 [2024-11-17 16:13:00.542457] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.112 [2024-11-17 16:13:00.552961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.112 qpair failed and we were unable to recover it. 
00:36:15.112 [2024-11-17 16:13:00.562424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.112 [2024-11-17 16:13:00.562491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.112 [2024-11-17 16:13:00.562519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.112 [2024-11-17 16:13:00.562533] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.112 [2024-11-17 16:13:00.562550] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.112 [2024-11-17 16:13:00.572837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.112 qpair failed and we were unable to recover it. 00:36:15.112 [2024-11-17 16:13:00.582418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.112 [2024-11-17 16:13:00.582481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.112 [2024-11-17 16:13:00.582507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.112 [2024-11-17 16:13:00.582523] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.112 [2024-11-17 16:13:00.582535] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.112 [2024-11-17 16:13:00.593000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.112 qpair failed and we were unable to recover it. 00:36:15.112 [2024-11-17 16:13:00.602486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.112 [2024-11-17 16:13:00.602550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.112 [2024-11-17 16:13:00.602583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.112 [2024-11-17 16:13:00.602597] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.112 [2024-11-17 16:13:00.602612] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.112 [2024-11-17 16:13:00.613158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.112 qpair failed and we were unable to recover it. 
00:36:15.112 [2024-11-17 16:13:00.622614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.112 [2024-11-17 16:13:00.622684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.112 [2024-11-17 16:13:00.622709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.112 [2024-11-17 16:13:00.622725] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.112 [2024-11-17 16:13:00.622736] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.112 [2024-11-17 16:13:00.632857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.112 qpair failed and we were unable to recover it. 00:36:15.389 [2024-11-17 16:13:00.642595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.389 [2024-11-17 16:13:00.642658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.389 [2024-11-17 16:13:00.642689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.389 [2024-11-17 16:13:00.642703] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.389 [2024-11-17 16:13:00.642717] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.389 [2024-11-17 16:13:00.652996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.389 qpair failed and we were unable to recover it. 00:36:15.389 [2024-11-17 16:13:00.662742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.389 [2024-11-17 16:13:00.662808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.389 [2024-11-17 16:13:00.662833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.389 [2024-11-17 16:13:00.662851] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.389 [2024-11-17 16:13:00.662862] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.389 [2024-11-17 16:13:00.672950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.389 qpair failed and we were unable to recover it. 
00:36:15.389 [2024-11-17 16:13:00.682820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.389 [2024-11-17 16:13:00.682886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.389 [2024-11-17 16:13:00.682914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.389 [2024-11-17 16:13:00.682928] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.389 [2024-11-17 16:13:00.682945] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.389 [2024-11-17 16:13:00.693679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.389 qpair failed and we were unable to recover it. 00:36:15.389 [2024-11-17 16:13:00.702817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.389 [2024-11-17 16:13:00.702888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.389 [2024-11-17 16:13:00.702914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.389 [2024-11-17 16:13:00.702929] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.389 [2024-11-17 16:13:00.702941] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.389 [2024-11-17 16:13:00.713128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.389 qpair failed and we were unable to recover it. 00:36:15.389 [2024-11-17 16:13:00.722899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.389 [2024-11-17 16:13:00.722966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.389 [2024-11-17 16:13:00.722991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.389 [2024-11-17 16:13:00.723006] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.389 [2024-11-17 16:13:00.723019] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.389 [2024-11-17 16:13:00.733277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.389 qpair failed and we were unable to recover it. 
00:36:15.389 [2024-11-17 16:13:00.743056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.389 [2024-11-17 16:13:00.743121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.389 [2024-11-17 16:13:00.743147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.389 [2024-11-17 16:13:00.743161] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.389 [2024-11-17 16:13:00.743172] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.389 [2024-11-17 16:13:00.753431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.389 qpair failed and we were unable to recover it. 00:36:15.389 [2024-11-17 16:13:00.763067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.389 [2024-11-17 16:13:00.763127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.389 [2024-11-17 16:13:00.763154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.389 [2024-11-17 16:13:00.763167] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.389 [2024-11-17 16:13:00.763179] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.389 [2024-11-17 16:13:00.773269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.389 qpair failed and we were unable to recover it. 00:36:15.389 [2024-11-17 16:13:00.783094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.389 [2024-11-17 16:13:00.783159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.389 [2024-11-17 16:13:00.783184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.389 [2024-11-17 16:13:00.783198] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.389 [2024-11-17 16:13:00.783210] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.389 [2024-11-17 16:13:00.793165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.389 qpair failed and we were unable to recover it. 
00:36:15.389 [2024-11-17 16:13:00.803214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.389 [2024-11-17 16:13:00.803279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.390 [2024-11-17 16:13:00.803305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.390 [2024-11-17 16:13:00.803320] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.390 [2024-11-17 16:13:00.803332] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.390 [2024-11-17 16:13:00.813435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.390 qpair failed and we were unable to recover it. 00:36:15.390 [2024-11-17 16:13:00.823190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.390 [2024-11-17 16:13:00.823251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.390 [2024-11-17 16:13:00.823280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.390 [2024-11-17 16:13:00.823295] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.390 [2024-11-17 16:13:00.823306] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.390 [2024-11-17 16:13:00.833568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.390 qpair failed and we were unable to recover it. 00:36:15.390 [2024-11-17 16:13:00.845078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.390 [2024-11-17 16:13:00.845140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.390 [2024-11-17 16:13:00.845165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.390 [2024-11-17 16:13:00.845179] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.390 [2024-11-17 16:13:00.845191] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.390 [2024-11-17 16:13:00.853541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.390 qpair failed and we were unable to recover it. 
00:36:15.390 [2024-11-17 16:13:00.863299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.390 [2024-11-17 16:13:00.863356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.390 [2024-11-17 16:13:00.863382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.390 [2024-11-17 16:13:00.863396] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.390 [2024-11-17 16:13:00.863409] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.390 [2024-11-17 16:13:00.873460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.390 qpair failed and we were unable to recover it. 00:36:15.390 [2024-11-17 16:13:00.883383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.390 [2024-11-17 16:13:00.883443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.390 [2024-11-17 16:13:00.883469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.390 [2024-11-17 16:13:00.883483] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.390 [2024-11-17 16:13:00.883495] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.390 [2024-11-17 16:13:00.893650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.390 qpair failed and we were unable to recover it. 00:36:15.390 [2024-11-17 16:13:00.903395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.390 [2024-11-17 16:13:00.903455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.390 [2024-11-17 16:13:00.903480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.390 [2024-11-17 16:13:00.903494] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.390 [2024-11-17 16:13:00.903509] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.390 [2024-11-17 16:13:00.913819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.390 qpair failed and we were unable to recover it. 
00:36:15.390 [2024-11-17 16:13:00.923488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.390 [2024-11-17 16:13:00.923543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.390 [2024-11-17 16:13:00.923574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.390 [2024-11-17 16:13:00.923589] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.390 [2024-11-17 16:13:00.923601] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.648 [2024-11-17 16:13:00.933782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.648 qpair failed and we were unable to recover it. 00:36:15.648 [2024-11-17 16:13:00.943571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.648 [2024-11-17 16:13:00.943630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.648 [2024-11-17 16:13:00.943656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.648 [2024-11-17 16:13:00.943670] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.648 [2024-11-17 16:13:00.943682] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.648 [2024-11-17 16:13:00.953955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.648 qpair failed and we were unable to recover it. 00:36:15.648 [2024-11-17 16:13:00.963634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.648 [2024-11-17 16:13:00.963701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.648 [2024-11-17 16:13:00.963726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.648 [2024-11-17 16:13:00.963740] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.648 [2024-11-17 16:13:00.963752] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.648 [2024-11-17 16:13:00.974110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.648 qpair failed and we were unable to recover it. 
00:36:15.648 [2024-11-17 16:13:00.983570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.648 [2024-11-17 16:13:00.983629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.648 [2024-11-17 16:13:00.983654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.648 [2024-11-17 16:13:00.983668] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.648 [2024-11-17 16:13:00.983680] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.648 [2024-11-17 16:13:00.997922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.648 qpair failed and we were unable to recover it. 00:36:15.648 [2024-11-17 16:13:01.003795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.648 [2024-11-17 16:13:01.003857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.648 [2024-11-17 16:13:01.003882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.648 [2024-11-17 16:13:01.003896] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.648 [2024-11-17 16:13:01.003908] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.648 [2024-11-17 16:13:01.013896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.648 qpair failed and we were unable to recover it. 00:36:15.649 [2024-11-17 16:13:01.023794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.649 [2024-11-17 16:13:01.023857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.649 [2024-11-17 16:13:01.023882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.649 [2024-11-17 16:13:01.023896] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.649 [2024-11-17 16:13:01.023908] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.649 [2024-11-17 16:13:01.034120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.649 qpair failed and we were unable to recover it. 
00:36:15.649 [2024-11-17 16:13:01.043917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.649 [2024-11-17 16:13:01.043979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.649 [2024-11-17 16:13:01.044005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.649 [2024-11-17 16:13:01.044019] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.649 [2024-11-17 16:13:01.044031] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.649 [2024-11-17 16:13:01.054176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.649 qpair failed and we were unable to recover it. 00:36:15.649 [2024-11-17 16:13:01.064008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.649 [2024-11-17 16:13:01.064066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.649 [2024-11-17 16:13:01.064091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.649 [2024-11-17 16:13:01.064104] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.649 [2024-11-17 16:13:01.064116] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.649 [2024-11-17 16:13:01.074198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.649 qpair failed and we were unable to recover it. 00:36:15.649 [2024-11-17 16:13:01.084000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.649 [2024-11-17 16:13:01.084067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.649 [2024-11-17 16:13:01.084093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.649 [2024-11-17 16:13:01.084107] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.649 [2024-11-17 16:13:01.084120] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.649 [2024-11-17 16:13:01.094269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.649 qpair failed and we were unable to recover it. 
00:36:15.649 [2024-11-17 16:13:01.104051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.649 [2024-11-17 16:13:01.104110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.649 [2024-11-17 16:13:01.104135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.649 [2024-11-17 16:13:01.104149] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.649 [2024-11-17 16:13:01.104160] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.649 [2024-11-17 16:13:01.114372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.649 qpair failed and we were unable to recover it. 00:36:15.649 [2024-11-17 16:13:01.124277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.649 [2024-11-17 16:13:01.124339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.649 [2024-11-17 16:13:01.124364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.649 [2024-11-17 16:13:01.124379] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.649 [2024-11-17 16:13:01.124391] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.649 [2024-11-17 16:13:01.134285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.649 qpair failed and we were unable to recover it. 00:36:15.649 [2024-11-17 16:13:01.144172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.649 [2024-11-17 16:13:01.144232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.649 [2024-11-17 16:13:01.144257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.649 [2024-11-17 16:13:01.144272] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.649 [2024-11-17 16:13:01.144284] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.649 [2024-11-17 16:13:01.154323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.649 qpair failed and we were unable to recover it. 
00:36:15.649 [2024-11-17 16:13:01.164228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.649 [2024-11-17 16:13:01.164288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.649 [2024-11-17 16:13:01.164318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.649 [2024-11-17 16:13:01.164333] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.649 [2024-11-17 16:13:01.164345] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.649 [2024-11-17 16:13:01.174592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.649 qpair failed and we were unable to recover it. 00:36:15.649 [2024-11-17 16:13:01.184378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.649 [2024-11-17 16:13:01.184441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.649 [2024-11-17 16:13:01.184466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.649 [2024-11-17 16:13:01.184480] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.649 [2024-11-17 16:13:01.184492] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.907 [2024-11-17 16:13:01.194405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.907 qpair failed and we were unable to recover it. 00:36:15.907 [2024-11-17 16:13:01.204396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.907 [2024-11-17 16:13:01.204458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.907 [2024-11-17 16:13:01.204483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.907 [2024-11-17 16:13:01.204498] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.907 [2024-11-17 16:13:01.204510] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.907 [2024-11-17 16:13:01.214650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.907 qpair failed and we were unable to recover it. 
00:36:15.907 [2024-11-17 16:13:01.224376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.907 [2024-11-17 16:13:01.224433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.907 [2024-11-17 16:13:01.224457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.907 [2024-11-17 16:13:01.224471] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.907 [2024-11-17 16:13:01.224483] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.907 [2024-11-17 16:13:01.234572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.907 qpair failed and we were unable to recover it. 00:36:15.907 [2024-11-17 16:13:01.244393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.907 [2024-11-17 16:13:01.244450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.907 [2024-11-17 16:13:01.244475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.907 [2024-11-17 16:13:01.244489] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.907 [2024-11-17 16:13:01.244505] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.907 [2024-11-17 16:13:01.254702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.907 qpair failed and we were unable to recover it. 00:36:15.907 [2024-11-17 16:13:01.264428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.907 [2024-11-17 16:13:01.264487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.908 [2024-11-17 16:13:01.264512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.908 [2024-11-17 16:13:01.264526] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.908 [2024-11-17 16:13:01.264538] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.908 [2024-11-17 16:13:01.274576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.908 qpair failed and we were unable to recover it. 
00:36:15.908 [2024-11-17 16:13:01.284570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.908 [2024-11-17 16:13:01.284625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.908 [2024-11-17 16:13:01.284651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.908 [2024-11-17 16:13:01.284665] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.908 [2024-11-17 16:13:01.284677] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.908 [2024-11-17 16:13:01.297409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.908 qpair failed and we were unable to recover it. 00:36:15.908 [2024-11-17 16:13:01.304598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.908 [2024-11-17 16:13:01.304657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.908 [2024-11-17 16:13:01.304682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.908 [2024-11-17 16:13:01.304697] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.908 [2024-11-17 16:13:01.304708] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.908 [2024-11-17 16:13:01.314991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.908 qpair failed and we were unable to recover it. 00:36:15.908 [2024-11-17 16:13:01.324775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.908 [2024-11-17 16:13:01.324840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.908 [2024-11-17 16:13:01.324865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.908 [2024-11-17 16:13:01.324879] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.908 [2024-11-17 16:13:01.324892] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.908 [2024-11-17 16:13:01.334928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.908 qpair failed and we were unable to recover it. 
00:36:15.908 [2024-11-17 16:13:01.344699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.908 [2024-11-17 16:13:01.344756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.908 [2024-11-17 16:13:01.344781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.908 [2024-11-17 16:13:01.344795] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.908 [2024-11-17 16:13:01.344807] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.908 [2024-11-17 16:13:01.354892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.908 qpair failed and we were unable to recover it. 00:36:15.908 [2024-11-17 16:13:01.364778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.908 [2024-11-17 16:13:01.364833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.908 [2024-11-17 16:13:01.364859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.908 [2024-11-17 16:13:01.364873] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.908 [2024-11-17 16:13:01.364885] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.908 [2024-11-17 16:13:01.374868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.908 qpair failed and we were unable to recover it. 00:36:15.908 [2024-11-17 16:13:01.384722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.908 [2024-11-17 16:13:01.384782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.908 [2024-11-17 16:13:01.384808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.908 [2024-11-17 16:13:01.384822] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.908 [2024-11-17 16:13:01.384833] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.908 [2024-11-17 16:13:01.395109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.908 qpair failed and we were unable to recover it. 
00:36:15.908 [2024-11-17 16:13:01.404957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.908 [2024-11-17 16:13:01.405016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.908 [2024-11-17 16:13:01.405041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.908 [2024-11-17 16:13:01.405055] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.908 [2024-11-17 16:13:01.405067] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.908 [2024-11-17 16:13:01.415005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.908 qpair failed and we were unable to recover it. 00:36:15.908 [2024-11-17 16:13:01.424867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.908 [2024-11-17 16:13:01.424933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.908 [2024-11-17 16:13:01.424959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.908 [2024-11-17 16:13:01.424974] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.908 [2024-11-17 16:13:01.424986] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:15.908 [2024-11-17 16:13:01.435096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:15.908 qpair failed and we were unable to recover it. 00:36:15.908 [2024-11-17 16:13:01.447316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:15.908 [2024-11-17 16:13:01.447379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:15.908 [2024-11-17 16:13:01.447405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:15.908 [2024-11-17 16:13:01.447419] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:15.908 [2024-11-17 16:13:01.447431] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.166 [2024-11-17 16:13:01.455333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.166 qpair failed and we were unable to recover it. 
00:36:16.166 [2024-11-17 16:13:01.465015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.465078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.465104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.465119] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.465130] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.167 [2024-11-17 16:13:01.475154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.167 qpair failed and we were unable to recover it. 00:36:16.167 [2024-11-17 16:13:01.485043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.485104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.485129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.485143] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.485154] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.167 [2024-11-17 16:13:01.495294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.167 qpair failed and we were unable to recover it. 00:36:16.167 [2024-11-17 16:13:01.505867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.505928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.505953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.505971] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.505983] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.167 [2024-11-17 16:13:01.515537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.167 qpair failed and we were unable to recover it. 
00:36:16.167 [2024-11-17 16:13:01.525267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.525332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.525358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.525372] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.525383] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.167 [2024-11-17 16:13:01.535481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.167 qpair failed and we were unable to recover it. 00:36:16.167 [2024-11-17 16:13:01.545291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.545350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.545376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.545390] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.545402] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.167 [2024-11-17 16:13:01.555461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.167 qpair failed and we were unable to recover it. 00:36:16.167 [2024-11-17 16:13:01.565352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.565409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.565435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.565449] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.565460] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.167 [2024-11-17 16:13:01.575740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.167 qpair failed and we were unable to recover it. 
00:36:16.167 [2024-11-17 16:13:01.585490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.585552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.585583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.585597] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.585610] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.167 [2024-11-17 16:13:01.597659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.167 qpair failed and we were unable to recover it. 00:36:16.167 [2024-11-17 16:13:01.605441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.605502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.605527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.605541] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.605554] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.167 [2024-11-17 16:13:01.615570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.167 qpair failed and we were unable to recover it. 00:36:16.167 [2024-11-17 16:13:01.625559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.625623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.625649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.625663] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.625675] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.167 [2024-11-17 16:13:01.635584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.167 qpair failed and we were unable to recover it. 
00:36:16.167 [2024-11-17 16:13:01.645577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.645632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.645657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.645671] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.645683] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.167 [2024-11-17 16:13:01.655727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.167 qpair failed and we were unable to recover it. 00:36:16.167 [2024-11-17 16:13:01.665786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.665843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.665869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.665883] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.665896] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.167 [2024-11-17 16:13:01.676180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.167 qpair failed and we were unable to recover it. 00:36:16.167 [2024-11-17 16:13:01.685796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.685851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.685877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.685891] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.685903] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.167 [2024-11-17 16:13:01.696089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.167 qpair failed and we were unable to recover it. 
00:36:16.167 [2024-11-17 16:13:01.705821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.167 [2024-11-17 16:13:01.705880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.167 [2024-11-17 16:13:01.705905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.167 [2024-11-17 16:13:01.705918] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.167 [2024-11-17 16:13:01.705931] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.426 [2024-11-17 16:13:01.716085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.426 qpair failed and we were unable to recover it. 00:36:16.426 [2024-11-17 16:13:01.726012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.426 [2024-11-17 16:13:01.726071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.426 [2024-11-17 16:13:01.726096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.426 [2024-11-17 16:13:01.726110] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.426 [2024-11-17 16:13:01.726123] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.426 [2024-11-17 16:13:01.735905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.426 qpair failed and we were unable to recover it. 00:36:16.426 [2024-11-17 16:13:01.748064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.426 [2024-11-17 16:13:01.748128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.426 [2024-11-17 16:13:01.748154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.426 [2024-11-17 16:13:01.748168] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.426 [2024-11-17 16:13:01.748179] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.426 [2024-11-17 16:13:01.756194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.426 qpair failed and we were unable to recover it. 
00:36:16.426 [2024-11-17 16:13:01.766024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.426 [2024-11-17 16:13:01.766082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.426 [2024-11-17 16:13:01.766110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.426 [2024-11-17 16:13:01.766124] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.426 [2024-11-17 16:13:01.766136] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.426 [2024-11-17 16:13:01.776314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.426 qpair failed and we were unable to recover it. 00:36:16.426 [2024-11-17 16:13:01.785985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.426 [2024-11-17 16:13:01.786046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.426 [2024-11-17 16:13:01.786071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.426 [2024-11-17 16:13:01.786085] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.426 [2024-11-17 16:13:01.786097] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.426 [2024-11-17 16:13:01.796327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.426 qpair failed and we were unable to recover it. 00:36:16.426 [2024-11-17 16:13:01.806093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.426 [2024-11-17 16:13:01.806148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.426 [2024-11-17 16:13:01.806175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.426 [2024-11-17 16:13:01.806189] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.426 [2024-11-17 16:13:01.806201] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.426 [2024-11-17 16:13:01.816437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.426 qpair failed and we were unable to recover it. 
00:36:16.426 [2024-11-17 16:13:01.826139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.426 [2024-11-17 16:13:01.826197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.426 [2024-11-17 16:13:01.826222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.426 [2024-11-17 16:13:01.826236] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.426 [2024-11-17 16:13:01.826249] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.426 [2024-11-17 16:13:01.836354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.426 qpair failed and we were unable to recover it. 00:36:16.426 [2024-11-17 16:13:01.846175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.426 [2024-11-17 16:13:01.846244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.426 [2024-11-17 16:13:01.846269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.426 [2024-11-17 16:13:01.846291] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.426 [2024-11-17 16:13:01.846303] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.426 [2024-11-17 16:13:01.856465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.426 qpair failed and we were unable to recover it. 00:36:16.426 [2024-11-17 16:13:01.866377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.426 [2024-11-17 16:13:01.866434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.426 [2024-11-17 16:13:01.866460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.426 [2024-11-17 16:13:01.866473] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.426 [2024-11-17 16:13:01.866485] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.426 [2024-11-17 16:13:01.876615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.426 qpair failed and we were unable to recover it. 
00:36:16.426 [2024-11-17 16:13:01.886335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.426 [2024-11-17 16:13:01.886392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.426 [2024-11-17 16:13:01.886417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.426 [2024-11-17 16:13:01.886431] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.426 [2024-11-17 16:13:01.886443] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.426 [2024-11-17 16:13:01.897942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.426 qpair failed and we were unable to recover it. 00:36:16.426 [2024-11-17 16:13:01.906393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.426 [2024-11-17 16:13:01.906454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.427 [2024-11-17 16:13:01.906480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.427 [2024-11-17 16:13:01.906494] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.427 [2024-11-17 16:13:01.906506] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.427 [2024-11-17 16:13:01.916629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.427 qpair failed and we were unable to recover it. 00:36:16.427 [2024-11-17 16:13:01.926470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.427 [2024-11-17 16:13:01.926528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.427 [2024-11-17 16:13:01.926553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.427 [2024-11-17 16:13:01.926582] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.427 [2024-11-17 16:13:01.926595] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.427 [2024-11-17 16:13:01.936886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.427 qpair failed and we were unable to recover it. 
00:36:16.427 [2024-11-17 16:13:01.946537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.427 [2024-11-17 16:13:01.946600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.427 [2024-11-17 16:13:01.946625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.427 [2024-11-17 16:13:01.946639] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.427 [2024-11-17 16:13:01.946650] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.427 [2024-11-17 16:13:01.956674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.427 qpair failed and we were unable to recover it. 00:36:16.427 [2024-11-17 16:13:01.966625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.427 [2024-11-17 16:13:01.966686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.427 [2024-11-17 16:13:01.966712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.427 [2024-11-17 16:13:01.966726] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.427 [2024-11-17 16:13:01.966738] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.685 [2024-11-17 16:13:01.977068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-11-17 16:13:01.986644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.685 [2024-11-17 16:13:01.986703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.685 [2024-11-17 16:13:01.986729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.685 [2024-11-17 16:13:01.986743] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.685 [2024-11-17 16:13:01.986754] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.685 [2024-11-17 16:13:01.996875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.685 qpair failed and we were unable to recover it. 
00:36:16.685 [2024-11-17 16:13:02.006672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.685 [2024-11-17 16:13:02.006735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.685 [2024-11-17 16:13:02.006760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.685 [2024-11-17 16:13:02.006775] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.685 [2024-11-17 16:13:02.006787] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.685 [2024-11-17 16:13:02.016934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-11-17 16:13:02.026859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.685 [2024-11-17 16:13:02.026917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.685 [2024-11-17 16:13:02.026942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.685 [2024-11-17 16:13:02.026957] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.685 [2024-11-17 16:13:02.026969] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.685 [2024-11-17 16:13:02.037016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.685 qpair failed and we were unable to recover it. 00:36:16.685 [2024-11-17 16:13:02.048671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.685 [2024-11-17 16:13:02.048744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.685 [2024-11-17 16:13:02.048770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.685 [2024-11-17 16:13:02.048784] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.685 [2024-11-17 16:13:02.048796] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.685 [2024-11-17 16:13:02.057125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.685 qpair failed and we were unable to recover it. 
00:36:16.685 [2024-11-17 16:13:02.066825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.685 [2024-11-17 16:13:02.066894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.685 [2024-11-17 16:13:02.066920] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.686 [2024-11-17 16:13:02.066934] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.686 [2024-11-17 16:13:02.066945] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.686 [2024-11-17 16:13:02.077075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-11-17 16:13:02.087036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.686 [2024-11-17 16:13:02.087103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.686 [2024-11-17 16:13:02.087128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.686 [2024-11-17 16:13:02.087142] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.686 [2024-11-17 16:13:02.087154] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.686 [2024-11-17 16:13:02.097280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-11-17 16:13:02.106977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.686 [2024-11-17 16:13:02.107037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.686 [2024-11-17 16:13:02.107067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.686 [2024-11-17 16:13:02.107081] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.686 [2024-11-17 16:13:02.107093] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.686 [2024-11-17 16:13:02.117299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.686 qpair failed and we were unable to recover it. 
00:36:16.686 [2024-11-17 16:13:02.127133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.686 [2024-11-17 16:13:02.127196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.686 [2024-11-17 16:13:02.127221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.686 [2024-11-17 16:13:02.127235] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.686 [2024-11-17 16:13:02.127247] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.686 [2024-11-17 16:13:02.137246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-11-17 16:13:02.147143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.686 [2024-11-17 16:13:02.147205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.686 [2024-11-17 16:13:02.147230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.686 [2024-11-17 16:13:02.147244] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.686 [2024-11-17 16:13:02.147256] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.686 [2024-11-17 16:13:02.157360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-11-17 16:13:02.167258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.686 [2024-11-17 16:13:02.167326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.686 [2024-11-17 16:13:02.167351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.686 [2024-11-17 16:13:02.167366] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.686 [2024-11-17 16:13:02.167378] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.686 [2024-11-17 16:13:02.177469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.686 qpair failed and we were unable to recover it. 
00:36:16.686 [2024-11-17 16:13:02.187215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.686 [2024-11-17 16:13:02.187273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.686 [2024-11-17 16:13:02.187299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.686 [2024-11-17 16:13:02.187317] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.686 [2024-11-17 16:13:02.187329] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.686 [2024-11-17 16:13:02.199098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.686 [2024-11-17 16:13:02.207364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.686 [2024-11-17 16:13:02.207425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.686 [2024-11-17 16:13:02.207451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.686 [2024-11-17 16:13:02.207465] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.686 [2024-11-17 16:13:02.207477] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.686 [2024-11-17 16:13:02.217594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.686 qpair failed and we were unable to recover it. 00:36:16.945 [2024-11-17 16:13:02.227414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.945 [2024-11-17 16:13:02.227472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.945 [2024-11-17 16:13:02.227497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.945 [2024-11-17 16:13:02.227512] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.945 [2024-11-17 16:13:02.227524] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.945 [2024-11-17 16:13:02.237652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.945 qpair failed and we were unable to recover it. 
00:36:16.945 [2024-11-17 16:13:02.247396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.945 [2024-11-17 16:13:02.247459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.945 [2024-11-17 16:13:02.247485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.945 [2024-11-17 16:13:02.247499] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.945 [2024-11-17 16:13:02.247512] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.945 [2024-11-17 16:13:02.257762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.945 qpair failed and we were unable to recover it. 00:36:16.945 [2024-11-17 16:13:02.267643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.945 [2024-11-17 16:13:02.267702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.945 [2024-11-17 16:13:02.267727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.945 [2024-11-17 16:13:02.267741] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.945 [2024-11-17 16:13:02.267753] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.945 [2024-11-17 16:13:02.277866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.945 qpair failed and we were unable to recover it. 00:36:16.945 [2024-11-17 16:13:02.287545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.945 [2024-11-17 16:13:02.287615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.945 [2024-11-17 16:13:02.287641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.945 [2024-11-17 16:13:02.287655] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.945 [2024-11-17 16:13:02.287667] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.945 [2024-11-17 16:13:02.297902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.945 qpair failed and we were unable to recover it. 
00:36:16.945 [2024-11-17 16:13:02.307667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.945 [2024-11-17 16:13:02.307732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.945 [2024-11-17 16:13:02.307757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.945 [2024-11-17 16:13:02.307771] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.945 [2024-11-17 16:13:02.307783] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.945 [2024-11-17 16:13:02.317745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.945 qpair failed and we were unable to recover it. 00:36:16.945 [2024-11-17 16:13:02.327720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.945 [2024-11-17 16:13:02.327778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.945 [2024-11-17 16:13:02.327804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.945 [2024-11-17 16:13:02.327819] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.945 [2024-11-17 16:13:02.327831] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.945 [2024-11-17 16:13:02.337897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.945 qpair failed and we were unable to recover it. 00:36:16.945 [2024-11-17 16:13:02.349149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.945 [2024-11-17 16:13:02.349210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.945 [2024-11-17 16:13:02.349235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.945 [2024-11-17 16:13:02.349249] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.945 [2024-11-17 16:13:02.349261] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.945 [2024-11-17 16:13:02.358059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.945 qpair failed and we were unable to recover it. 
00:36:16.945 [2024-11-17 16:13:02.367852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.945 [2024-11-17 16:13:02.367914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.945 [2024-11-17 16:13:02.367939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.945 [2024-11-17 16:13:02.367953] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.945 [2024-11-17 16:13:02.367964] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.945 [2024-11-17 16:13:02.377954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.945 qpair failed and we were unable to recover it. 00:36:16.945 [2024-11-17 16:13:02.387953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.945 [2024-11-17 16:13:02.388014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.945 [2024-11-17 16:13:02.388039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.945 [2024-11-17 16:13:02.388053] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.945 [2024-11-17 16:13:02.388065] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.945 [2024-11-17 16:13:02.398150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.945 qpair failed and we were unable to recover it. 00:36:16.945 [2024-11-17 16:13:02.408075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.945 [2024-11-17 16:13:02.408137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.946 [2024-11-17 16:13:02.408162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.946 [2024-11-17 16:13:02.408177] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.946 [2024-11-17 16:13:02.408189] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.946 [2024-11-17 16:13:02.418212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.946 qpair failed and we were unable to recover it. 
00:36:16.946 [2024-11-17 16:13:02.427898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.946 [2024-11-17 16:13:02.427964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.946 [2024-11-17 16:13:02.427990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.946 [2024-11-17 16:13:02.428004] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.946 [2024-11-17 16:13:02.428016] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.946 [2024-11-17 16:13:02.438290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.946 qpair failed and we were unable to recover it. 00:36:16.946 [2024-11-17 16:13:02.448062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.946 [2024-11-17 16:13:02.448116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.946 [2024-11-17 16:13:02.448146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.946 [2024-11-17 16:13:02.448159] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.946 [2024-11-17 16:13:02.448172] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.946 [2024-11-17 16:13:02.458452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.946 qpair failed and we were unable to recover it. 00:36:16.946 [2024-11-17 16:13:02.468143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:16.946 [2024-11-17 16:13:02.468200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:16.946 [2024-11-17 16:13:02.468226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:16.946 [2024-11-17 16:13:02.468239] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:16.946 [2024-11-17 16:13:02.468251] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:16.946 [2024-11-17 16:13:02.478313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:16.946 qpair failed and we were unable to recover it. 
00:36:17.204 [2024-11-17 16:13:02.488115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.204 [2024-11-17 16:13:02.488181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.204 [2024-11-17 16:13:02.488207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.204 [2024-11-17 16:13:02.488220] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.204 [2024-11-17 16:13:02.488232] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.204 [2024-11-17 16:13:02.499356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-17 16:13:02.508247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.204 [2024-11-17 16:13:02.508304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.204 [2024-11-17 16:13:02.508329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.204 [2024-11-17 16:13:02.508343] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.204 [2024-11-17 16:13:02.508355] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.204 [2024-11-17 16:13:02.518574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-17 16:13:02.528243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.204 [2024-11-17 16:13:02.528303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.204 [2024-11-17 16:13:02.528329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.204 [2024-11-17 16:13:02.528343] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.204 [2024-11-17 16:13:02.528358] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.204 [2024-11-17 16:13:02.538560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.204 qpair failed and we were unable to recover it. 
00:36:17.204 [2024-11-17 16:13:02.548365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.204 [2024-11-17 16:13:02.548421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.204 [2024-11-17 16:13:02.548446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.204 [2024-11-17 16:13:02.548460] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.204 [2024-11-17 16:13:02.548472] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.204 [2024-11-17 16:13:02.558521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-17 16:13:02.568405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.204 [2024-11-17 16:13:02.568463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.204 [2024-11-17 16:13:02.568489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.204 [2024-11-17 16:13:02.568502] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.204 [2024-11-17 16:13:02.568514] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.204 [2024-11-17 16:13:02.578742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-17 16:13:02.588403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.204 [2024-11-17 16:13:02.588464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.204 [2024-11-17 16:13:02.588489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.204 [2024-11-17 16:13:02.588503] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.204 [2024-11-17 16:13:02.588515] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.204 [2024-11-17 16:13:02.598520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.204 qpair failed and we were unable to recover it. 
00:36:17.204 [2024-11-17 16:13:02.608524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.204 [2024-11-17 16:13:02.608584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.204 [2024-11-17 16:13:02.608610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.204 [2024-11-17 16:13:02.608624] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.204 [2024-11-17 16:13:02.608636] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.204 [2024-11-17 16:13:02.618627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-17 16:13:02.628597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.204 [2024-11-17 16:13:02.628659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.204 [2024-11-17 16:13:02.628684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.204 [2024-11-17 16:13:02.628698] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.204 [2024-11-17 16:13:02.628711] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.204 [2024-11-17 16:13:02.638704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.204 qpair failed and we were unable to recover it. 00:36:17.204 [2024-11-17 16:13:02.648771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.204 [2024-11-17 16:13:02.648833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.204 [2024-11-17 16:13:02.648859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.204 [2024-11-17 16:13:02.648873] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.204 [2024-11-17 16:13:02.648885] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.205 [2024-11-17 16:13:02.659163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.205 qpair failed and we were unable to recover it. 
00:36:17.205 [2024-11-17 16:13:02.668729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.205 [2024-11-17 16:13:02.668790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.205 [2024-11-17 16:13:02.668816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.205 [2024-11-17 16:13:02.668829] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.205 [2024-11-17 16:13:02.668841] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.205 [2024-11-17 16:13:02.679068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-17 16:13:02.688832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.205 [2024-11-17 16:13:02.688897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.205 [2024-11-17 16:13:02.688922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.205 [2024-11-17 16:13:02.688937] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.205 [2024-11-17 16:13:02.688949] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.205 [2024-11-17 16:13:02.699231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.205 [2024-11-17 16:13:02.708791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.205 [2024-11-17 16:13:02.708860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.205 [2024-11-17 16:13:02.708886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.205 [2024-11-17 16:13:02.708900] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.205 [2024-11-17 16:13:02.708911] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.205 [2024-11-17 16:13:02.719194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.205 qpair failed and we were unable to recover it. 
00:36:17.205 [2024-11-17 16:13:02.729139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.205 [2024-11-17 16:13:02.729201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.205 [2024-11-17 16:13:02.729226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.205 [2024-11-17 16:13:02.729240] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.205 [2024-11-17 16:13:02.729252] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.205 [2024-11-17 16:13:02.739341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.205 qpair failed and we were unable to recover it. 00:36:17.463 [2024-11-17 16:13:02.748867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.463 [2024-11-17 16:13:02.748930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.463 [2024-11-17 16:13:02.748955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.463 [2024-11-17 16:13:02.748970] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.463 [2024-11-17 16:13:02.748981] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.463 [2024-11-17 16:13:02.759301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.463 qpair failed and we were unable to recover it. 00:36:17.463 [2024-11-17 16:13:02.769039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.463 [2024-11-17 16:13:02.769105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.463 [2024-11-17 16:13:02.769131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.463 [2024-11-17 16:13:02.769144] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.463 [2024-11-17 16:13:02.769156] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.463 [2024-11-17 16:13:02.779389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.463 qpair failed and we were unable to recover it. 
00:36:17.463 [2024-11-17 16:13:02.789223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.463 [2024-11-17 16:13:02.789287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.463 [2024-11-17 16:13:02.789316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.463 [2024-11-17 16:13:02.789330] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.463 [2024-11-17 16:13:02.789342] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.463 [2024-11-17 16:13:02.799819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.463 qpair failed and we were unable to recover it. 00:36:17.463 [2024-11-17 16:13:02.809227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.463 [2024-11-17 16:13:02.809290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.463 [2024-11-17 16:13:02.809316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.463 [2024-11-17 16:13:02.809330] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.463 [2024-11-17 16:13:02.809342] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.463 [2024-11-17 16:13:02.819279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.463 qpair failed and we were unable to recover it. 00:36:17.463 [2024-11-17 16:13:02.829216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.463 [2024-11-17 16:13:02.829276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.463 [2024-11-17 16:13:02.829301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.463 [2024-11-17 16:13:02.829315] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.463 [2024-11-17 16:13:02.829327] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.463 [2024-11-17 16:13:02.839517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.463 qpair failed and we were unable to recover it. 
00:36:17.463 [2024-11-17 16:13:02.849327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.463 [2024-11-17 16:13:02.849386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.463 [2024-11-17 16:13:02.849411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.463 [2024-11-17 16:13:02.849426] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.463 [2024-11-17 16:13:02.849438] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.463 [2024-11-17 16:13:02.859529] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.463 qpair failed and we were unable to recover it. 00:36:17.464 [2024-11-17 16:13:02.869442] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.464 [2024-11-17 16:13:02.869499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.464 [2024-11-17 16:13:02.869525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.464 [2024-11-17 16:13:02.869538] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.464 [2024-11-17 16:13:02.869554] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.464 [2024-11-17 16:13:02.879778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.464 qpair failed and we were unable to recover it. 00:36:17.464 [2024-11-17 16:13:02.889453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.464 [2024-11-17 16:13:02.889511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.464 [2024-11-17 16:13:02.889537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.464 [2024-11-17 16:13:02.889551] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.464 [2024-11-17 16:13:02.889562] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.464 [2024-11-17 16:13:02.899726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.464 qpair failed and we were unable to recover it. 
00:36:17.464 [2024-11-17 16:13:02.909454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.464 [2024-11-17 16:13:02.909513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.464 [2024-11-17 16:13:02.909539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.464 [2024-11-17 16:13:02.909553] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.464 [2024-11-17 16:13:02.909579] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.464 [2024-11-17 16:13:02.919590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.464 qpair failed and we were unable to recover it. 00:36:17.464 [2024-11-17 16:13:02.929540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.464 [2024-11-17 16:13:02.929600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.464 [2024-11-17 16:13:02.929625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.464 [2024-11-17 16:13:02.929639] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.464 [2024-11-17 16:13:02.929651] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.464 [2024-11-17 16:13:02.939684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.464 qpair failed and we were unable to recover it. 00:36:17.464 [2024-11-17 16:13:02.950595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.464 [2024-11-17 16:13:02.950659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.464 [2024-11-17 16:13:02.950684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.464 [2024-11-17 16:13:02.950698] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.464 [2024-11-17 16:13:02.950710] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.464 [2024-11-17 16:13:02.959893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.464 qpair failed and we were unable to recover it. 
00:36:17.464 [2024-11-17 16:13:02.969683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.464 [2024-11-17 16:13:02.969741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.464 [2024-11-17 16:13:02.969766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.464 [2024-11-17 16:13:02.969780] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.464 [2024-11-17 16:13:02.969792] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.464 [2024-11-17 16:13:02.979716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.464 qpair failed and we were unable to recover it. 00:36:17.464 [2024-11-17 16:13:02.989703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.464 [2024-11-17 16:13:02.989761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.464 [2024-11-17 16:13:02.989787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.464 [2024-11-17 16:13:02.989801] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.464 [2024-11-17 16:13:02.989813] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.464 [2024-11-17 16:13:02.999988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.464 qpair failed and we were unable to recover it. 00:36:17.721 [2024-11-17 16:13:03.009796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.721 [2024-11-17 16:13:03.009858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.721 [2024-11-17 16:13:03.009883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.721 [2024-11-17 16:13:03.009898] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.721 [2024-11-17 16:13:03.009910] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.721 [2024-11-17 16:13:03.020113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.721 qpair failed and we were unable to recover it. 
00:36:17.721 [2024-11-17 16:13:03.029931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.721 [2024-11-17 16:13:03.029995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.721 [2024-11-17 16:13:03.030021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.721 [2024-11-17 16:13:03.030035] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.721 [2024-11-17 16:13:03.030047] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.721 [2024-11-17 16:13:03.040086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.721 qpair failed and we were unable to recover it. 00:36:17.721 [2024-11-17 16:13:03.049854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.721 [2024-11-17 16:13:03.049926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.722 [2024-11-17 16:13:03.049951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.722 [2024-11-17 16:13:03.049965] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.722 [2024-11-17 16:13:03.049977] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.722 [2024-11-17 16:13:03.060344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.722 qpair failed and we were unable to recover it. 00:36:17.722 [2024-11-17 16:13:03.070053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.722 [2024-11-17 16:13:03.070111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.722 [2024-11-17 16:13:03.070137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.722 [2024-11-17 16:13:03.070151] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.722 [2024-11-17 16:13:03.070163] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.722 [2024-11-17 16:13:03.080311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.722 qpair failed and we were unable to recover it. 
00:36:17.722 [2024-11-17 16:13:03.090110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.722 [2024-11-17 16:13:03.090166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.722 [2024-11-17 16:13:03.090192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.722 [2024-11-17 16:13:03.090206] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.722 [2024-11-17 16:13:03.090218] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.722 [2024-11-17 16:13:03.101277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.722 qpair failed and we were unable to recover it. 00:36:17.722 [2024-11-17 16:13:03.110140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.722 [2024-11-17 16:13:03.110202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.722 [2024-11-17 16:13:03.110228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.722 [2024-11-17 16:13:03.110242] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.722 [2024-11-17 16:13:03.110254] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.722 [2024-11-17 16:13:03.120451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.722 qpair failed and we were unable to recover it. 00:36:17.722 [2024-11-17 16:13:03.130270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.722 [2024-11-17 16:13:03.130329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.722 [2024-11-17 16:13:03.130354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.722 [2024-11-17 16:13:03.130372] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.722 [2024-11-17 16:13:03.130384] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.722 [2024-11-17 16:13:03.140230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.722 qpair failed and we were unable to recover it. 
00:36:17.722 [2024-11-17 16:13:03.150320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.722 [2024-11-17 16:13:03.150379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.722 [2024-11-17 16:13:03.150405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.722 [2024-11-17 16:13:03.150419] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.722 [2024-11-17 16:13:03.150431] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.722 [2024-11-17 16:13:03.160549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.722 qpair failed and we were unable to recover it. 00:36:17.722 [2024-11-17 16:13:03.170164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.722 [2024-11-17 16:13:03.170221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.722 [2024-11-17 16:13:03.170247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.722 [2024-11-17 16:13:03.170261] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.722 [2024-11-17 16:13:03.170273] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.722 [2024-11-17 16:13:03.180469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.722 qpair failed and we were unable to recover it. 00:36:17.722 [2024-11-17 16:13:03.190298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.722 [2024-11-17 16:13:03.190358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.722 [2024-11-17 16:13:03.190383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.722 [2024-11-17 16:13:03.190397] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.722 [2024-11-17 16:13:03.190409] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.722 [2024-11-17 16:13:03.200516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.722 qpair failed and we were unable to recover it. 
00:36:17.722 [2024-11-17 16:13:03.210333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.722 [2024-11-17 16:13:03.210396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.722 [2024-11-17 16:13:03.210422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.722 [2024-11-17 16:13:03.210437] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.722 [2024-11-17 16:13:03.210453] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.722 [2024-11-17 16:13:03.220446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.722 qpair failed and we were unable to recover it. 00:36:17.722 [2024-11-17 16:13:03.230496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.722 [2024-11-17 16:13:03.230559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.722 [2024-11-17 16:13:03.230591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.722 [2024-11-17 16:13:03.230605] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.722 [2024-11-17 16:13:03.230617] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.722 [2024-11-17 16:13:03.240792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.722 qpair failed and we were unable to recover it. 00:36:17.722 [2024-11-17 16:13:03.250542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.722 [2024-11-17 16:13:03.250605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.722 [2024-11-17 16:13:03.250632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.722 [2024-11-17 16:13:03.250647] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.722 [2024-11-17 16:13:03.250660] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.722 [2024-11-17 16:13:03.260648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.722 qpair failed and we were unable to recover it. 
00:36:17.980 [2024-11-17 16:13:03.270598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.980 [2024-11-17 16:13:03.270661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.980 [2024-11-17 16:13:03.270687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.980 [2024-11-17 16:13:03.270701] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.980 [2024-11-17 16:13:03.270713] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.980 [2024-11-17 16:13:03.281007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.980 qpair failed and we were unable to recover it. 00:36:17.980 [2024-11-17 16:13:03.290504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.980 [2024-11-17 16:13:03.290562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.980 [2024-11-17 16:13:03.290602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.980 [2024-11-17 16:13:03.290616] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.980 [2024-11-17 16:13:03.290628] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.980 [2024-11-17 16:13:03.300722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.980 qpair failed and we were unable to recover it. 00:36:17.980 [2024-11-17 16:13:03.310736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.980 [2024-11-17 16:13:03.310796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.980 [2024-11-17 16:13:03.310822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.980 [2024-11-17 16:13:03.310836] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.980 [2024-11-17 16:13:03.310848] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.980 [2024-11-17 16:13:03.320864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.980 qpair failed and we were unable to recover it. 
00:36:17.980 [2024-11-17 16:13:03.330687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.980 [2024-11-17 16:13:03.330749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.980 [2024-11-17 16:13:03.330774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.980 [2024-11-17 16:13:03.330788] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.980 [2024-11-17 16:13:03.330800] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.980 [2024-11-17 16:13:03.341396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.980 qpair failed and we were unable to recover it. 00:36:17.980 [2024-11-17 16:13:03.350721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.980 [2024-11-17 16:13:03.350786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.980 [2024-11-17 16:13:03.350811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.980 [2024-11-17 16:13:03.350825] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.980 [2024-11-17 16:13:03.350837] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.980 [2024-11-17 16:13:03.361238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.980 qpair failed and we were unable to recover it. 00:36:17.980 [2024-11-17 16:13:03.370819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.980 [2024-11-17 16:13:03.370884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.980 [2024-11-17 16:13:03.370910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.980 [2024-11-17 16:13:03.370925] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.980 [2024-11-17 16:13:03.370937] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.980 [2024-11-17 16:13:03.381062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.980 qpair failed and we were unable to recover it. 
00:36:17.980 [2024-11-17 16:13:03.390863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.980 [2024-11-17 16:13:03.390924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.981 [2024-11-17 16:13:03.390952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.981 [2024-11-17 16:13:03.390967] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.981 [2024-11-17 16:13:03.390979] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.981 [2024-11-17 16:13:03.402620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.981 qpair failed and we were unable to recover it. 00:36:17.981 [2024-11-17 16:13:03.411022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.981 [2024-11-17 16:13:03.411083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.981 [2024-11-17 16:13:03.411109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.981 [2024-11-17 16:13:03.411123] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.981 [2024-11-17 16:13:03.411135] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.981 [2024-11-17 16:13:03.421470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.981 qpair failed and we were unable to recover it. 00:36:17.981 [2024-11-17 16:13:03.431114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.981 [2024-11-17 16:13:03.431180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.981 [2024-11-17 16:13:03.431206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.981 [2024-11-17 16:13:03.431220] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.981 [2024-11-17 16:13:03.431231] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.981 [2024-11-17 16:13:03.441367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.981 qpair failed and we were unable to recover it. 
00:36:17.981 [2024-11-17 16:13:03.451120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.981 [2024-11-17 16:13:03.451180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.981 [2024-11-17 16:13:03.451205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.981 [2024-11-17 16:13:03.451220] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.981 [2024-11-17 16:13:03.451232] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.981 [2024-11-17 16:13:03.461618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.981 qpair failed and we were unable to recover it. 00:36:17.981 [2024-11-17 16:13:03.471241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.981 [2024-11-17 16:13:03.471299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.981 [2024-11-17 16:13:03.471325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.981 [2024-11-17 16:13:03.471344] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.981 [2024-11-17 16:13:03.471356] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.981 [2024-11-17 16:13:03.481468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.981 qpair failed and we were unable to recover it. 00:36:17.981 [2024-11-17 16:13:03.491242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.981 [2024-11-17 16:13:03.491305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.981 [2024-11-17 16:13:03.491331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.981 [2024-11-17 16:13:03.491344] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.981 [2024-11-17 16:13:03.491356] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.981 [2024-11-17 16:13:03.501528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.981 qpair failed and we were unable to recover it. 
00:36:17.981 [2024-11-17 16:13:03.511276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:17.981 [2024-11-17 16:13:03.511335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:17.981 [2024-11-17 16:13:03.511361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:17.981 [2024-11-17 16:13:03.511382] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:17.981 [2024-11-17 16:13:03.511395] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:17.981 [2024-11-17 16:13:03.521492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:17.981 qpair failed and we were unable to recover it. 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O 
failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Read completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 Write completed with error (sct=0, sc=8) 00:36:19.353 starting I/O failed 00:36:19.353 [2024-11-17 16:13:04.529688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.353 [2024-11-17 16:13:04.534390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.353 [2024-11-17 16:13:04.534470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.353 [2024-11-17 16:13:04.534501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.353 [2024-11-17 16:13:04.534518] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.353 [2024-11-17 16:13:04.534533] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:36:19.353 [2024-11-17 16:13:04.544658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.353 qpair failed and we were unable to recover it. 00:36:19.353 [2024-11-17 16:13:04.554426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.353 [2024-11-17 16:13:04.554499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.353 [2024-11-17 16:13:04.554525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.353 [2024-11-17 16:13:04.554543] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.353 [2024-11-17 16:13:04.554556] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:36:19.353 [2024-11-17 16:13:04.564528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:19.353 qpair failed and we were unable to recover it. 00:36:19.353 [2024-11-17 16:13:04.564847] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:19.353 A controller has encountered a failure and is being reset. 
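The keep-alive failure above is the point where the host declares the controller failed and starts a reset; "Controller properly reset" further below shows that reset eventually succeeding. A minimal sketch of that recovery path, assuming the standard SPDK NVMe driver API and an invented helper name (this is not the reconnect example's actual code):

#include "spdk/nvme.h"
#include <stdio.h>

/* Illustrative sketch: notice a controller that stopped answering
 * keep-alives and try to reset it, as the output above shows. */
static int
recover_ctrlr_if_failed(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Keep-alive and error state only make progress while admin
	 * completions are being polled. */
	spdk_nvme_ctrlr_process_admin_completions(ctrlr);

	if (!spdk_nvme_ctrlr_is_failed(ctrlr)) {
		return 0;	/* controller still healthy */
	}

	/* Disconnects and re-initializes the controller; over NVMe-oF this
	 * re-runs the fabrics CONNECT for the admin queue. */
	if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
		fprintf(stderr, "controller reset failed\n");
		return -1;
	}
	return 1;	/* controller was reset */
}

Existing I/O qpairs are disabled by a reset and have to be reconnected by the application afterwards (SPDK provides spdk_nvme_ctrlr_reconnect_io_qpair() for that) before I/O can resume.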
00:36:19.353 [2024-11-17 16:13:04.574615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.353 [2024-11-17 16:13:04.574692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.353 [2024-11-17 16:13:04.574731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.354 [2024-11-17 16:13:04.574752] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.354 [2024-11-17 16:13:04.574771] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:36:19.354 [2024-11-17 16:13:04.584642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.354 qpair failed and we were unable to recover it. 00:36:19.354 [2024-11-17 16:13:04.594631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.354 [2024-11-17 16:13:04.594702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.354 [2024-11-17 16:13:04.594729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.354 [2024-11-17 16:13:04.594745] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.354 [2024-11-17 16:13:04.594760] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:36:19.354 [2024-11-17 16:13:04.604839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:19.354 qpair failed and we were unable to recover it. 00:36:19.354 [2024-11-17 16:13:04.614600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.354 [2024-11-17 16:13:04.614669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.354 [2024-11-17 16:13:04.614700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.354 [2024-11-17 16:13:04.614715] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.354 [2024-11-17 16:13:04.614729] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:36:19.354 [2024-11-17 16:13:04.624719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.354 qpair failed and we were unable to recover it. 
00:36:19.354 [2024-11-17 16:13:04.634695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:19.354 [2024-11-17 16:13:04.634763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:19.354 [2024-11-17 16:13:04.634789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:19.354 [2024-11-17 16:13:04.634806] nvme_rdma.c:1332:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:19.354 [2024-11-17 16:13:04.634817] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:36:19.354 [2024-11-17 16:13:04.644857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:19.354 qpair failed and we were unable to recover it. 00:36:19.354 [2024-11-17 16:13:04.645145] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:36:19.354 [2024-11-17 16:13:04.691362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 0 00:36:19.354 Controller properly reset. 00:36:19.354 Initializing NVMe Controllers 00:36:19.354 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:36:19.354 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:36:19.354 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:19.354 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:19.354 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:19.354 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:19.354 Initialization complete. Launching workers. 
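"Attaching to NVMe over Fabrics controller at 192.168.100.8:4420" above is the host connecting to the target subsystem again once the reset completes. For orientation only, and not the reconnect example's source, attaching to such a controller from a transport-ID string of the same shape as this test's -r argument can be sketched as follows (alt_traddr is an option of the reconnect example itself, not a transport-ID field; the app name is made up):

#include "spdk/env.h"
#include "spdk/nvme.h"
#include <stdio.h>
#include <string.h>

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvme_qpair *qpair;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "attach_sketch";	/* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	memset(&trid, 0, sizeof(trid));
	/* Same shape as the transport string seen throughout this log;
	 * subnqn selects the subsystem the test targets. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);	/* default controller opts */
	if (ctrlr == NULL) {
		fprintf(stderr, "failed to attach to %s\n", trid.traddr);
		return 1;
	}

	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		spdk_nvme_detach(ctrlr);
		return 1;
	}

	/* ... submit I/O and poll spdk_nvme_qpair_process_completions() ... */

	spdk_nvme_ctrlr_free_io_qpair(qpair);
	spdk_nvme_detach(ctrlr);
	return 0;
}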
00:36:19.354 Starting thread on core 1 00:36:19.354 Starting thread on core 2 00:36:19.354 Starting thread on core 3 00:36:19.354 Starting thread on core 0 00:36:19.611 16:13:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:19.611 00:36:19.611 real 0m13.120s 00:36:19.611 user 0m28.364s 00:36:19.611 sys 0m2.966s 00:36:19.611 16:13:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:19.611 16:13:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:19.611 ************************************ 00:36:19.611 END TEST nvmf_target_disconnect_tc2 00:36:19.612 ************************************ 00:36:19.612 16:13:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:36:19.612 16:13:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:36:19.612 16:13:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:19.612 16:13:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:19.612 16:13:04 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:19.612 ************************************ 00:36:19.612 START TEST nvmf_target_disconnect_tc3 00:36:19.612 ************************************ 00:36:19.612 16:13:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc3 00:36:19.612 16:13:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=3552444 00:36:19.612 16:13:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:36:19.612 16:13:05 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:36:21.508 16:13:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 3551075 00:36:21.508 16:13:07 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write 
completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Write completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 Read completed with error (sct=0, sc=8) 00:36:22.886 starting I/O failed 00:36:22.886 [2024-11-17 16:13:08.293382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:36:23.836 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 3551075 Killed "${NVMF_APP[@]}" "$@" 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3552996 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3552996 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- 
common/autotest_common.sh@835 -- # '[' -z 3552996 ']' 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:23.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.836 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:23.836 [2024-11-17 16:13:09.118558] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:23.836 [2024-11-17 16:13:09.118667] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:23.836 [2024-11-17 16:13:09.279452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 
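The I/O failures here come from the tc3 initiator, the reconnect example launched a few entries above against 192.168.100.8 with alt_traddr pointing at the failover target. A commented restatement of that invocation; the flag meanings follow the usual SPDK example-app conventions and are an assumption, not something stated in this log:

# Sketch of the tc3 initiator command captured above (run from the SPDK build tree).
#   -q 32        queue depth per qpair
#   -o 4096      I/O size in bytes
#   -w randrw    random mixed workload
#   -M 50        50% reads / 50% writes
#   -t 10        run time in seconds
#   -c 0xF       core mask, cores 0-3
#   -r '...'     target transport ID; alt_traddr is the address to fall back to
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'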
00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Write completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 Read completed with error (sct=0, sc=8) 00:36:23.836 starting I/O failed 00:36:23.836 [2024-11-17 16:13:09.298844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:36:24.094 [2024-11-17 16:13:09.386016] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:24.094 [2024-11-17 16:13:09.386061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:24.094 [2024-11-17 16:13:09.386074] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:24.094 [2024-11-17 16:13:09.386088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:24.094 [2024-11-17 16:13:09.386099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:24.094 [2024-11-17 16:13:09.388714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:24.094 [2024-11-17 16:13:09.388809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:24.094 [2024-11-17 16:13:09.388901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:24.094 [2024-11-17 16:13:09.388927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:24.660 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:24.660 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@868 -- # return 0 00:36:24.660 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:24.660 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:24.660 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:24.660 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:24.660 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:24.660 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.660 16:13:09 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:24.660 Malloc0 00:36:24.660 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.660 16:13:10 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:36:24.660 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.660 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:24.660 [2024-11-17 16:13:10.083222] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6120000298c0/0x7f7b9c37d940) succeed. 00:36:24.660 [2024-11-17 16:13:10.093109] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x612000029a40/0x7f7b9c339940) succeed. 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Write completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 Read completed with error (sct=0, sc=8) 00:36:24.917 starting I/O failed 00:36:24.917 [2024-11-17 
16:13:10.304413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:24.917 [2024-11-17 16:13:10.382666] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.917 16:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 3552444 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 
starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Write completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 Read completed with error (sct=0, sc=8) 00:36:25.849 starting I/O failed 00:36:25.849 [2024-11-17 16:13:11.309969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:36:25.849 [2024-11-17 16:13:11.312025] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:25.849 [2024-11-17 16:13:11.312067] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:25.849 [2024-11-17 16:13:11.312080] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:36:26.781 [2024-11-17 16:13:12.316289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:36:26.781 qpair failed and we were unable to recover it. 
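Before handing control back to the reconnect job with 'wait 3552444', the script stands up the replacement target on the failover address via the rpc_cmd calls above. Collected into one place, the same sequence through scripts/rpc.py would look roughly like this (the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions):

# Minimal sketch of the target-side setup shown above, against the nvmf_tgt started with -m 0xF0.
RPC=./scripts/rpc.py
$RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MB malloc bdev, 512 B blocks
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024   # RDMA transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Listeners on the alternate (failover) address the tc3 initiator reconnects to:
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420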
00:36:26.781 [2024-11-17 16:13:12.318035] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:26.781 [2024-11-17 16:13:12.318065] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:26.781 [2024-11-17 16:13:12.318078] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:36:28.153 [2024-11-17 16:13:13.322307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:36:28.153 qpair failed and we were unable to recover it. 00:36:28.153 [2024-11-17 16:13:13.324247] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:28.153 [2024-11-17 16:13:13.324278] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:28.153 [2024-11-17 16:13:13.324291] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:36:29.084 [2024-11-17 16:13:14.328326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:36:29.084 qpair failed and we were unable to recover it. 00:36:29.084 [2024-11-17 16:13:14.330464] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:29.084 [2024-11-17 16:13:14.330508] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:29.084 [2024-11-17 16:13:14.330526] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:30.016 [2024-11-17 16:13:15.334982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:36:30.016 qpair failed and we were unable to recover it. 00:36:30.016 [2024-11-17 16:13:15.336877] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:30.016 [2024-11-17 16:13:15.336906] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:30.016 [2024-11-17 16:13:15.336919] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3cc0 00:36:30.949 [2024-11-17 16:13:16.341111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 3 00:36:30.949 qpair failed and we were unable to recover it. 
00:36:30.949 [2024-11-17 16:13:16.343674] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:30.949 [2024-11-17 16:13:16.343720] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:30.949 [2024-11-17 16:13:16.343737] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:36:31.882 [2024-11-17 16:13:17.347851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:36:31.882 qpair failed and we were unable to recover it. 00:36:31.882 [2024-11-17 16:13:17.349681] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:31.882 [2024-11-17 16:13:17.349709] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:31.882 [2024-11-17 16:13:17.349722] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4940 00:36:32.811 [2024-11-17 16:13:18.353813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 1 00:36:32.811 qpair failed and we were unable to recover it. 00:36:33.069 [2024-11-17 16:13:18.355896] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:33.069 [2024-11-17 16:13:18.355949] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:33.069 [2024-11-17 16:13:18.355967] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:36:34.001 [2024-11-17 16:13:19.360225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.001 qpair failed and we were unable to recover it. 00:36:34.001 [2024-11-17 16:13:19.362031] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:34.001 [2024-11-17 16:13:19.362062] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:34.001 [2024-11-17 16:13:19.362074] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3140 00:36:34.932 [2024-11-17 16:13:20.366225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.932 qpair failed and we were unable to recover it. 
00:36:34.932 [2024-11-17 16:13:20.368306] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:34.932 [2024-11-17 16:13:20.368342] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:34.932 [2024-11-17 16:13:20.368355] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:36:35.864 [2024-11-17 16:13:21.372524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:36:35.864 qpair failed and we were unable to recover it. 00:36:35.864 [2024-11-17 16:13:21.374469] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:36:35.864 [2024-11-17 16:13:21.374498] nvme_rdma.c:1077:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:36:35.864 [2024-11-17 16:13:21.374511] nvme_rdma.c:2701:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cfb40 00:36:37.236 [2024-11-17 16:13:22.378569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 2 00:36:37.236 qpair failed and we were unable to recover it. 00:36:37.236 [2024-11-17 16:13:22.379268] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Submitting Keep Alive failed 00:36:37.236 A controller has encountered a failure and is being reset. 00:36:37.236 Resorting to new failover address 192.168.100.9 00:36:37.236 [2024-11-17 16:13:22.379394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:37.236 [2024-11-17 16:13:22.379494] nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:36:37.236 [2024-11-17 16:13:22.420676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] CQ transport error -6 (No such device or address) on qpair id 0 00:36:37.236 Controller properly reset. 00:36:37.236 Initializing NVMe Controllers 00:36:37.236 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:36:37.236 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:36:37.236 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:37.236 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:37.236 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:37.236 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:37.236 Initialization complete. Launching workers. 
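Once the keep-alive submission fails, the host fails the controller, resorts to the failover address, and reattaches, which is what the "Controller properly reset" block above records. A quick check that the failover listener really is exported, reusing the rpc.py assumption from the sketch earlier:

# Hypothetical post-failover sanity check: confirm 192.168.100.9 shows up as a listener.
./scripts/rpc.py nvmf_get_subsystems | grep -q '192.168.100.9' \
    && echo 'failover listener present' || echo 'failover listener missing'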
00:36:37.236 Starting thread on core 1 00:36:37.236 Starting thread on core 2 00:36:37.236 Starting thread on core 3 00:36:37.236 Starting thread on core 0 00:36:37.236 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:36:37.236 00:36:37.236 real 0m17.637s 00:36:37.236 user 1m2.871s 00:36:37.236 sys 0m4.601s 00:36:37.236 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:37.236 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:36:37.236 ************************************ 00:36:37.236 END TEST nvmf_target_disconnect_tc3 00:36:37.236 ************************************ 00:36:37.236 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:37.236 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:37.236 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:37.236 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:37.236 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:36:37.236 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:36:37.236 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:37.236 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:37.236 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:36:37.236 rmmod nvme_rdma 00:36:37.236 rmmod nvme_fabrics 00:36:37.237 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:37.237 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:37.237 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:37.237 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3552996 ']' 00:36:37.237 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3552996 00:36:37.237 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3552996 ']' 00:36:37.237 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3552996 00:36:37.237 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:37.237 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:37.237 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3552996 00:36:37.494 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:37.494 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:37.494 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3552996' 00:36:37.494 killing process with pid 3552996 00:36:37.494 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@973 -- # kill 3552996 00:36:37.494 16:13:22 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3552996 00:36:39.392 16:13:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:39.392 16:13:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:36:39.392 00:36:39.392 real 0m41.218s 00:36:39.392 user 2m33.010s 00:36:39.392 sys 0m13.807s 00:36:39.392 16:13:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:39.392 16:13:24 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:39.392 ************************************ 00:36:39.392 END TEST nvmf_target_disconnect 00:36:39.392 ************************************ 00:36:39.392 16:13:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:39.392 00:36:39.392 real 7m58.869s 00:36:39.392 user 22m55.540s 00:36:39.392 sys 1m48.373s 00:36:39.392 16:13:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:39.392 16:13:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.392 ************************************ 00:36:39.392 END TEST nvmf_host 00:36:39.392 ************************************ 00:36:39.393 16:13:24 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:36:39.393 00:36:39.393 real 29m33.597s 00:36:39.393 user 86m46.883s 00:36:39.393 sys 6m52.222s 00:36:39.393 16:13:24 nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:39.393 16:13:24 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:39.393 ************************************ 00:36:39.393 END TEST nvmf_rdma 00:36:39.393 ************************************ 00:36:39.393 16:13:24 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:36:39.393 16:13:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:39.393 16:13:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:39.393 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:36:39.393 ************************************ 00:36:39.393 START TEST spdkcli_nvmf_rdma 00:36:39.393 ************************************ 00:36:39.393 16:13:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:36:39.393 * Looking for test storage... 
00:36:39.393 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:36:39.393 16:13:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:39.393 16:13:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lcov --version 00:36:39.393 16:13:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:39.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.653 --rc genhtml_branch_coverage=1 00:36:39.653 --rc genhtml_function_coverage=1 00:36:39.653 --rc genhtml_legend=1 00:36:39.653 --rc geninfo_all_blocks=1 00:36:39.653 --rc geninfo_unexecuted_blocks=1 00:36:39.653 00:36:39.653 ' 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:39.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:36:39.653 --rc genhtml_branch_coverage=1 00:36:39.653 --rc genhtml_function_coverage=1 00:36:39.653 --rc genhtml_legend=1 00:36:39.653 --rc geninfo_all_blocks=1 00:36:39.653 --rc geninfo_unexecuted_blocks=1 00:36:39.653 00:36:39.653 ' 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:39.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.653 --rc genhtml_branch_coverage=1 00:36:39.653 --rc genhtml_function_coverage=1 00:36:39.653 --rc genhtml_legend=1 00:36:39.653 --rc geninfo_all_blocks=1 00:36:39.653 --rc geninfo_unexecuted_blocks=1 00:36:39.653 00:36:39.653 ' 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:39.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:39.653 --rc genhtml_branch_coverage=1 00:36:39.653 --rc genhtml_function_coverage=1 00:36:39.653 --rc genhtml_legend=1 00:36:39.653 --rc geninfo_all_blocks=1 00:36:39.653 --rc geninfo_unexecuted_blocks=1 00:36:39.653 00:36:39.653 ' 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:36:39.653 16:13:24 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:39.653 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:36:39.653 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:39.653 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:39.653 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:39.653 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:39.653 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:39.653 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:39.653 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:39.653 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:39.653 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:39.653 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:39.653 16:13:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:39.653 16:13:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3555783 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 3555783 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # '[' -z 3555783 ']' 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:39.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:39.654 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:39.654 [2024-11-17 16:13:25.101920] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:36:39.654 [2024-11-17 16:13:25.102030] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3555783 ] 00:36:39.913 [2024-11-17 16:13:25.233311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:39.913 [2024-11-17 16:13:25.329683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:39.913 [2024-11-17 16:13:25.329692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@868 -- # return 0 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@469 -- # '[' -z rdma ']' 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:36:40.480 16:13:25 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:47.045 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
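gather_supported_nvmf_pci_devs builds the Intel (e810, x722) and Mellanox device-ID tables above and then walks the PCI bus for matches; on this rig that resolves to the two ConnectX-4 Lx ports reported in the next entries. A rough manual equivalent, assuming lspci and sysfs are available (15b3:1015 is the vendor/device pair this run matches):

# Hand-rolled sketch of the same scan: list ConnectX-4 Lx functions (15b3:1015)
# and print the netdev registered under each, as the test script's sysfs walk does.
for pci in $(lspci -Dnd 15b3:1015 | awk '{print $1}'); do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdir" ] && echo "$pci -> $(basename "$netdir")"
    done
done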
00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:36:47.046 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:36:47.046 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:36:47.046 Found net devices under 0000:d9:00.0: mlx_0_0 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ rdma == tcp ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:36:47.046 Found net devices under 0000:d9:00.1: mlx_0_1 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # is_hw=yes 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@444 
-- # [[ yes == yes ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == tcp ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@447 -- # [[ rdma == rdma ]] 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # rdma_device_init 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@529 -- # load_ib_rdma_modules 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:36:47.046 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@530 -- # allocate_nic_ips 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:47.305 
16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:36:47.305 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:47.305 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:36:47.305 altname enp217s0f0np0 00:36:47.305 altname ens818f0np0 00:36:47.305 inet 192.168.100.8/24 scope global mlx_0_0 00:36:47.305 valid_lft forever preferred_lft forever 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:36:47.305 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:36:47.305 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:36:47.305 altname enp217s0f1np1 00:36:47.305 altname ens818f1np1 00:36:47.305 inet 192.168.100.9/24 scope global mlx_0_1 00:36:47.305 valid_lft forever preferred_lft forever 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # return 0 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # [[ rdma == \r\d\m\a ]] 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # get_available_rdma_ips 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:47.305 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # RDMA_IP_LIST='192.168.100.8 00:36:47.306 192.168.100.9' 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # echo '192.168.100.8 00:36:47.306 192.168.100.9' 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # head -n 1 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # head -n 1 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # echo '192.168.100.8 00:36:47.306 192.168.100.9' 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # tail -n +2 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@486 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@487 -- # '[' -z 192.168.100.8 ']' 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == tcp ']' 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@496 -- # '[' rdma == rdma ']' 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # modprobe nvme-rdma 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:47.306 16:13:32 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:47.306 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:47.306 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:47.306 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:47.306 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:47.306 '\''/bdevs/malloc create 32 512 
Malloc6'\'' '\''Malloc6'\'' True 00:36:47.306 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:47.306 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:36:47.306 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:36:47.306 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:47.306 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:47.306 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:47.306 ' 00:36:50.590 [2024-11-17 16:13:35.410036] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x61200002b840/0x7f644800c940) succeed. 00:36:50.590 [2024-11-17 16:13:35.419913] rdma.c:2623:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x61200002b9c0/0x7f6446aa6940) succeed. 
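Note: spdkcli_job.py (invoked above) replays each quoted triple — a spdkcli command, an expected output substring, and a check flag — against scripts/spdkcli.py on the running target, which is what produces the "Executing command" lines that follow. The same configuration could be entered interactively; the first few steps, with commands taken verbatim from the job (prompt shown illustratively), would look like:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py
  /> /bdevs/malloc create 32 512 Malloc1
  /> nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192
  /> /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  /> /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4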
00:36:51.529 [2024-11-17 16:13:36.759390] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:36:53.535 [2024-11-17 16:13:39.006526] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:36:55.442 [2024-11-17 16:13:40.937021] rdma.c:3078:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:36:57.344 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:57.344 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:57.344 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:57.344 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:57.344 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:57.344 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:57.344 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:57.344 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:57.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:57.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:57.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:36:57.344 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:57.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:57.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:36:57.344 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:57.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:57.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:36:57.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:36:57.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:57.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:57.344 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:57.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:57.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:36:57.345 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:36:57.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:57.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:57.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:57.345 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:57.345 16:13:42 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:57.345 16:13:42 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:57.345 16:13:42 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:57.345 16:13:42 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:57.345 16:13:42 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:57.345 16:13:42 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:57.345 16:13:42 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:36:57.345 16:13:42 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:57.604 16:13:42 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:57.604 16:13:42 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:57.604 16:13:42 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:57.604 16:13:42 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:57.604 16:13:42 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:57.604 16:13:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:57.604 16:13:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:57.604 16:13:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:36:57.604 16:13:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:57.604 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:57.604 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:57.604 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:57.604 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:36:57.604 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:36:57.604 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:57.604 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:57.604 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:57.604 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:57.604 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:57.604 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:57.604 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:57.604 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:57.604 ' 00:37:04.166 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:37:04.166 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:37:04.166 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:04.166 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:37:04.166 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:37:04.166 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:37:04.166 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:37:04.166 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:37:04.167 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:37:04.167 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:37:04.167 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:37:04.167 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:37:04.167 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:37:04.167 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 3555783 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # '[' -z 3555783 ']' 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@958 -- # kill -0 3555783 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # uname 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3555783 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3555783' 00:37:04.167 killing process with pid 3555783 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@973 -- # kill 3555783 00:37:04.167 16:13:48 spdkcli_nvmf_rdma -- common/autotest_common.sh@978 -- # wait 3555783 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 
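Note: the killprocess step above is the standard autotest teardown — confirm the PID is still alive and really is an SPDK reactor before signalling it, then wait for it to exit. A condensed sketch of that sequence (the autotest_common.sh helper adds the logging seen in the trace; this is not its exact code):

  pid=3555783
  # Only signal the process if it is alive and its command name is the SPDK primary reactor.
  if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ]; then
      kill "$pid"
      wait "$pid" 2>/dev/null || true
  fi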
00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:37:04.732 rmmod nvme_rdma 00:37:04.732 rmmod nvme_fabrics 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- nvmf/common.sh@523 -- # [[ rdma == \t\c\p ]] 00:37:04.732 00:37:04.732 real 0m25.372s 00:37:04.732 user 0m52.971s 00:37:04.732 sys 0m6.297s 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:04.732 16:13:50 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:37:04.732 ************************************ 00:37:04.732 END TEST spdkcli_nvmf_rdma 00:37:04.732 ************************************ 00:37:04.732 16:13:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:04.732 16:13:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:04.732 16:13:50 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:04.732 16:13:50 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:37:04.732 16:13:50 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:04.732 16:13:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:04.732 16:13:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:04.732 16:13:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:04.732 16:13:50 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:04.732 16:13:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:04.732 16:13:50 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:37:04.732 16:13:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:04.732 16:13:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:04.732 16:13:50 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:37:04.732 16:13:50 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:37:04.732 16:13:50 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:37:04.732 16:13:50 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:37:04.732 16:13:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:04.732 16:13:50 -- common/autotest_common.sh@10 -- # set +x 00:37:04.732 16:13:50 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:37:04.732 16:13:50 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:37:04.732 16:13:50 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:37:04.732 16:13:50 -- common/autotest_common.sh@10 -- # set +x 00:37:11.296 INFO: APP EXITING 00:37:11.296 INFO: killing all VMs 00:37:11.296 INFO: killing vhost app 00:37:11.296 INFO: EXIT DONE 00:37:13.840 Waiting for block devices as requested 00:37:13.840 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:13.840 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:13.840 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:13.840 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:14.098 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:14.098 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:14.098 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:14.357 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 
00:37:14.357 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:14.357 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:14.616 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:14.616 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:14.616 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:14.875 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:14.875 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:14.875 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:15.133 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:37:18.417 Cleaning 00:37:18.417 Removing: /var/run/dpdk/spdk0/config 00:37:18.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:18.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:18.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:18.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:18.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:18.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:18.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:18.417 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:18.417 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:18.417 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:18.417 Removing: /var/run/dpdk/spdk1/config 00:37:18.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:18.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:18.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:18.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:18.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:18.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:18.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:18.417 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:18.417 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:18.417 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:18.417 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:18.417 Removing: /var/run/dpdk/spdk2/config 00:37:18.417 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:18.417 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:18.417 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:18.417 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:18.417 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:18.417 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:18.417 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:18.417 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:18.417 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:18.417 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:18.417 Removing: /var/run/dpdk/spdk3/config 00:37:18.417 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:18.417 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:18.417 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:18.417 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:18.417 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:18.417 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:18.417 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:18.417 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:18.417 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:18.417 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:18.417 Removing: /var/run/dpdk/spdk4/config 00:37:18.417 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:18.417 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:18.417 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:18.417 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:18.417 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:18.417 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:18.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:18.676 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:18.676 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:18.676 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:18.676 Removing: /dev/shm/bdevperf_trace.pid3173660 00:37:18.676 Removing: /dev/shm/bdev_svc_trace.1 00:37:18.676 Removing: /dev/shm/nvmf_trace.0 00:37:18.676 Removing: /dev/shm/spdk_tgt_trace.pid3116744 00:37:18.676 Removing: /var/run/dpdk/spdk0 00:37:18.676 Removing: /var/run/dpdk/spdk1 00:37:18.676 Removing: /var/run/dpdk/spdk2 00:37:18.676 Removing: /var/run/dpdk/spdk3 00:37:18.676 Removing: /var/run/dpdk/spdk4 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3112432 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3114156 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3116744 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3117787 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3119104 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3119694 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3121080 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3121247 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3122018 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3127323 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3129400 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3130253 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3130870 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3131730 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3132341 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3132630 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3132924 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3133317 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3134362 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3137880 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3138619 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3139230 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3139461 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3141113 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3141377 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3143013 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3143261 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3143854 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3144120 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3144684 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3144825 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3146399 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3146684 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3147023 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3151466 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3156247 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3167093 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3168377 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3173660 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3173944 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3178745 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3184913 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3187908 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3199190 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3226048 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3230343 00:37:18.676 Removing: /var/run/dpdk/spdk_pid3328151 
00:37:18.934 Removing: /var/run/dpdk/spdk_pid3333844 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3339756 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3350064 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3382306 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3387513 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3430599 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3432433 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3434319 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3436074 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3441135 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3447970 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3455662 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3456801 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3458339 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3459452 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3459935 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3464968 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3464972 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3469804 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3470372 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3471070 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3471898 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3471934 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3474347 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3476267 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3478161 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3480119 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3482004 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3483879 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3490268 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3490921 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3493290 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3495297 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3502983 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3505871 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3511862 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3522231 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3522238 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3543130 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3543593 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3549958 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3550365 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3552444 00:37:18.934 Removing: /var/run/dpdk/spdk_pid3555783 00:37:18.934 Clean 00:37:19.191 16:14:04 -- common/autotest_common.sh@1453 -- # return 0 00:37:19.191 16:14:04 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:37:19.191 16:14:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:19.191 16:14:04 -- common/autotest_common.sh@10 -- # set +x 00:37:19.191 16:14:04 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:37:19.191 16:14:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:19.191 16:14:04 -- common/autotest_common.sh@10 -- # set +x 00:37:19.191 16:14:04 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:37:19.191 16:14:04 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:37:19.191 16:14:04 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:37:19.191 16:14:04 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:37:19.191 16:14:04 -- spdk/autotest.sh@398 -- # hostname 00:37:19.191 16:14:04 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:37:19.449 geninfo: WARNING: invalid characters removed from testname! 00:37:41.370 16:14:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:37:41.938 16:14:27 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:37:43.851 16:14:29 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:37:45.759 16:14:30 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:37:47.134 16:14:32 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:37:49.035 16:14:34 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:37:50.410 16:14:35 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:50.410 16:14:35 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:50.410 16:14:35 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt ]] 00:37:50.410 16:14:35 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:50.410 16:14:35 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:50.410 16:14:35 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build 
Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:37:50.669 + [[ -n 3032458 ]] 00:37:50.669 + sudo kill 3032458 00:37:50.678 [Pipeline] } 00:37:50.696 [Pipeline] // stage 00:37:50.701 [Pipeline] } 00:37:50.718 [Pipeline] // timeout 00:37:50.723 [Pipeline] } 00:37:50.739 [Pipeline] // catchError 00:37:50.745 [Pipeline] } 00:37:50.763 [Pipeline] // wrap 00:37:50.769 [Pipeline] } 00:37:50.784 [Pipeline] // catchError 00:37:50.795 [Pipeline] stage 00:37:50.798 [Pipeline] { (Epilogue) 00:37:50.813 [Pipeline] catchError 00:37:50.815 [Pipeline] { 00:37:50.829 [Pipeline] echo 00:37:50.831 Cleanup processes 00:37:50.838 [Pipeline] sh 00:37:51.128 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:37:51.128 3575901 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:37:51.178 [Pipeline] sh 00:37:51.514 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:37:51.514 ++ grep -v 'sudo pgrep' 00:37:51.514 ++ awk '{print $1}' 00:37:51.514 + sudo kill -9 00:37:51.514 + true 00:37:51.527 [Pipeline] sh 00:37:51.809 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:51.809 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:37:58.365 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:38:01.660 [Pipeline] sh 00:38:01.945 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:01.945 Artifacts sizes are good 00:38:01.960 [Pipeline] archiveArtifacts 00:38:01.967 Archiving artifacts 00:38:02.148 [Pipeline] sh 00:38:02.425 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:38:02.438 [Pipeline] cleanWs 00:38:02.447 [WS-CLEANUP] Deleting project workspace... 00:38:02.447 [WS-CLEANUP] Deferred wipeout is used... 00:38:02.453 [WS-CLEANUP] done 00:38:02.454 [Pipeline] } 00:38:02.469 [Pipeline] // catchError 00:38:02.479 [Pipeline] sh 00:38:02.758 + logger -p user.info -t JENKINS-CI 00:38:02.767 [Pipeline] } 00:38:02.780 [Pipeline] // stage 00:38:02.785 [Pipeline] } 00:38:02.799 [Pipeline] // node 00:38:02.805 [Pipeline] End of Pipeline 00:38:02.843 Finished: SUCCESS
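Closing note on the coverage post-processing above: the lcov runs merge the baseline and per-test captures into cov_total.info and then strip DPDK, system, and example sources before the report is archived. A condensed reproduction using the same flags shown in the log (a sketch; the --rc branch/function-coverage options are omitted here, and the output directory is the workspace path used above):

  OUT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
  # Merge the baseline and test captures into one tracefile.
  lcov -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info
  # Drop third-party and uninteresting sources from the merged report.
  lcov -q -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info
  lcov -q -r $OUT/cov_total.info --ignore-errors unused,unused '/usr/*' -o $OUT/cov_total.info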